Subject: Re: live kernel upgrades (was: live kernel patching design)

* Vojtech Pavlik <vojtech@suse.com> wrote:

> On Sun, Feb 22, 2015 at 03:01:48PM -0800, Andrew Morton wrote:
>
> > On Sun, 22 Feb 2015 20:13:28 +0100 (CET) Jiri Kosina <jkosina@suse.cz> wrote:
> >
> > > But if you ask the folks who are hungry for live bug
> > > patching, they wouldn't care.
> > >
> > > You mentioned "10 seconds"; that's more or less equal
> > > to infinity to them.
> >
> > "A 10-second outage is unacceptable, but we're running
> > our service on a single machine with no failover." Who
> > is doing this??
>
> This is the most common argument that's raised when live
> patching is discussed: "Why do we need live patching when
> we have redundancy?"

My argument is that if we start off with a latency of 10
seconds and improve it gradually, that will be good for
everyone, with a clear, actionable route even for those who
cannot take a 10-second delay today.

Let's see the use cases:

> [...] Examples would be legacy applications which can't
> run in an active-active cluster and need to be restarted
> on failover.

Most clusters (say web frontends) can take a stoppage of a
couple of seconds.

> [...] Or trading systems, where the calculations must be
> strictly serialized and response times are counted in
> tens of microseconds.

All trading systems I'm aware of have daily maintenance
time periods that can afford at minimum a couple of
seconds of optional maintenance latency: stock trading
systems can be maintained when there's no trading session
(which is many hours), and aftermarket or global trading
systems can be maintained when the daily rollover
interest is calculated, in a predetermined low-activity
period.

> Another usecase is large HPC clusters, where all nodes
> have to run carefully synchronized. Once one gets behind
> in a calculation cycle, others have to wait for the
> results and the efficiency of the whole cluster goes
> down. [...]

I think calculation nodes on large HPC clusters qualify as
the specialized case that I mentioned, where the update
latency could be brought down into the 1 second range.

But I don't think calculation nodes are patched in the
typical case: you might want to patch the Internet-facing
frontend systems, while the rest is left as undisturbed as
possible. So I'm not even sure this is a typical use case.

In any case, there's no hard limit on how fast such a
kernel upgrade can get in principle, the folks who care
about that latency will surely help out optimizing it, and
many HPC projects are well funded.

> The value of live patching is in near zero disruption.

Latency is a good attribute of a kernel upgrade mechanism,
but it's far from the only attribute, and we should
definitely not design limitations into the approach,
hurting all the other attributes, just to optimize that
single one.

I.e. don't make it a single-issue project.

Thanks,

Ingo

