Date:	2012-01-31
From:	Ingo Molnar
Subject: Re: smp: Start up non-boot CPUs asynchronously

* Arjan van de Ven <arjan@infradead.org> wrote:

> From 3700e391ab2841a9f9241e4e31a6281aa59be5f1 Mon Sep 17 00:00:00 2001
> From: Arjan van de Ven <arjan@linux.intel.com>
> Date: Mon, 30 Jan 2012 20:44:51 -0800
> Subject: [PATCH] smp: Start up non-boot CPUs asynchronously
>
> The starting of the "not first" CPUs actually takes up a lot
> of the kernel's boot time... up to "minutes" on some of the
> bigger SGI boxes. Right now, this is a fully sequential
> operation with the rest of the kernel boot.

Yeah.

> This patch turns this bringup of the other cpus into an
> asynchronous operation, saving significant kernel boot time
> (40% on my laptop!!). Basically now CPUs get brought up in
> parallel to disk enumeration, graphic mode bringup etc etc
> etc.

Very nice!

> Note that the implementation in this patch still waits for all
> CPUs to be brought up before starting userspace; I would love
> to remove that restriction over time (technically that is
> simple), but that then becomes a change in behavior... I'd
> like to see more discussion on that being a good idea before I
> write that patch.

Yeah, it's a good idea to be conservative with that - most of
the silent assumptions will be on the kernel init side anyway
and we want to map those out first, without any userspace
variance mixed in.
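
( Side note: the scheduling half isn't quoted above, so the
  sketch below is just my assumption of what it looks like -
  roughly, each present CPU gets queued to the generic async
  machinery and a final async_synchronize_full() preserves the
  "all CPUs up before userspace" behavior:

	#include <linux/async.h>
	#include <linux/cpu.h>
	#include <linux/smp.h>

	void __init smp_init(void)
	{
		unsigned int cpu;

		/* queue the bring-up of every remaining present CPU */
		for_each_present_cpu(cpu) {
			if (num_online_cpus() >= setup_max_cpus)
				break;
			if (!cpu_online(cpu))
				async_schedule(async_cpu_up,
					       (void *)(unsigned long)cpu);
		}

		/* keep current behavior: all CPUs online before userspace */
		async_synchronize_full();

		smp_cpus_done(setup_max_cpus);
	}

  That final barrier is exactly the thing to relax later, once
  the kernel-side assumptions have been mapped out. )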

I'd expect this patch to eventually break stuff in the kernel -
we'll fix any kernel bugs that get uncovered, and we can move on
to make things more parallel once that process has stabilized.

> Second note: We add a small delay between the bring-up of
> cpus; this is needed to actually get a boot time improvement.
> If we bring up CPUs straight back-to-back, we hog the cpu
> hotplug lock for write, and that lock is used everywhere
> during initialization for read. By adding a small delay, we
> allow those tasks to make progress.

> +void __init async_cpu_up(void *data, async_cookie_t cookie)
> +{
> +	unsigned long nr = (unsigned long) data;
> +	/*
> +	 * we can only up one cpu at a time, due to the hotplug lock;
> +	 * it's better to wait for all earlier CPUs to be done before
> +	 * us so that the bring-up order is predictable.
> +	 */
> +	async_synchronize_cookie(cookie);
> +	/*
> +	 * wait a little bit of time between cpus, to allow
> +	 * the kernel boot to not get stuck for a long time
> +	 * on the hotplug lock. We wait longer for the first
> +	 * CPU since much of the early kernel init code is
> +	 * of the hotplug-lock-using type.
> +	 */
> +	if (nr < 2)
> +		msleep(100);
> +	else
> +		msleep(5);

Hm, the limits here seem way too ad-hoc and rigid to me.

The bigger worry is that it makes the asynchronicity of the
boot process very timing-dependent, 'hiding' a lot of early
code on faster boxes and only interleaving the execution on
slower boxes. But slower boxes are harder to debug!

The real fix would be to make the various bits of init code
depend less on each other, i.e. have fewer hotplug lock
dependencies. Or, if it's such a hot lock for a good reason,
why does spinning on it slow down the boot process? It really
shouldn't.
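
To make the read side concrete, a lot of initcalls bracket
their per-cpu setup roughly like this (some_driver_init is a
made-up example, not something from the patch):

	#include <linux/cpu.h>
	#include <linux/init.h>

	/* made-up initcall, only to show the read-side pattern */
	static int __init some_driver_init(void)
	{
		int cpu;

		get_online_cpus();	/* read side of the hotplug lock */
		for_each_online_cpu(cpu) {
			/* per-cpu setup work goes here */
		}
		put_online_cpus();

		return 0;
	}
	device_initcall(some_driver_init);

Every such reader stalls while a cpu_up() holds the lock for
write - if back-to-back cpu_up() calls really starve these
readers badly enough to show up in boot time, that is worth
understanding rather than papering over with msleep().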

So I think this bit is not a good idea. Let's just be fully
parallel and profile early execution via 'perf kvm' or so, and
figure out where the hotplug lock overhead comes from.

Thanks,

Ingo

