Subject: Re: [patch] x86: Rendezvous all the cpu's for MTRR/PAT init
On Tue, 18 Aug 2009 17:30:35 -0700 Suresh Siddha <suresh.b.siddha@intel.com> wrote:

> Please consider applying this patch after the clockevents bugfix posted
> yesterday http://marc.info/?l=linux-kernel&m=125054497316006&w=2
>
> thanks,
> suresh
> ---
>
> From: Suresh Siddha <suresh.b.siddha@intel.com>
> Subject: x86: Rendezvous all the cpu's for MTRR/PAT init
>
> SDM Vol 3a, in the section titled "MTRR considerations in MP systems",
> specifies the need to synchronize the logical cpus while initializing or
> updating MTRRs.
>
> Currently the Linux kernel performs this synchronization of all cpus only
> when a single MTRR register is programmed/updated. During an AP online
> (during boot/cpu-online/resume), where we initialize all the MTRR/PAT
> registers, we don't follow this synchronization algorithm.
>
> This can lead to a scenario where, during a dynamic cpu online, the
> logical cpu coming online initializes MTRR/PAT with the cache disabled
> (cr0.cd=1) while its logical HT sibling continues to run (also with the
> cache disabled, because of cr0.cd=1 on its sibling).
>
> Starting with Westmere, VMX transitions with cr0.cd=1 don't work properly
> (because of some VMX performance optimizations), and the above scenario
> (one logical cpu doing VMX activity while another logical cpu comes
> online) can result in a system crash.
>
> Fix the MTRR initialization by doing a rendezvous of all the cpus. During
> boot and resume, we delay the MTRR/PAT init for the APs until all the
> logical cpus have come online; the rendezvous process at the end of AP
> bringup then initializes MTRR/PAT for all the APs.
>
> For a dynamic single-cpu online, we synchronize all the logical cpus and
> do the MTRR/PAT init on the AP that is coming online.
>
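The rendezvous described above is conceptually a stop_machine-style barrier,
like the one the existing single-register set_mtrr() path uses. A minimal
sketch of the protocol, with hypothetical names (the real code lives in
arch/x86/kernel/cpu/mtrr/main.c):

	static atomic_t cpus_checked_in;
	static atomic_t gate;

	/* Runs on every CPU, e.g. via smp_call_function(). */
	static void mtrr_rendezvous(void *unused)
	{
		atomic_inc(&cpus_checked_in);	/* I have arrived */
		while (!atomic_read(&gate))	/* spin until the initiator */
			cpu_relax();		/* opens the gate */
		/* cache off (cr0.cd=1), write MTRR/PAT, flush, cache on */
		atomic_dec(&cpus_checked_in);	/* I am done */
	}

The initiator waits until cpus_checked_in reaches the number of online cpus,
opens the gate, then waits for the count to drop back to zero, so no cpu
ever runs normally while a sibling has cr0.cd=1.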
> ...
>
> @@ -880,7 +880,12 @@ void __cpuinit identify_secondary_cpu(st
> #ifdef CONFIG_X86_32
> enable_sep_cpu();
> #endif
> + /*
> + * mtrr_ap_init() uses smp_call_function. Enable interrupts.
> + */
> + local_irq_enable();
> mtrr_ap_init();
> + local_irq_disable();
> }

Ick.

It's quite unobvious (to me) that this function is reliably called with
local interrupts disabled.

If it _is_ reliably called with interrupts disabled, then why is it safe
to randomly reenable them here? Why not just stop disabling interrupts
at the top level?
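
In other words, the alternative would be for the bringup path to call
this function with interrupts already enabled, rather than flipping them
inside it. Roughly (hypothetical call site, for illustration only):

	/* AP bringup, at a point where taking interrupts is safe */
	local_irq_enable();
	identify_secondary_cpu(c);	/* mtrr_ap_init() is then free to
					 * use smp_call_function() */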

>
> ...
>
> +void mtrr_aps_init(void)
> +{
> + if (!use_intel())
> + return;
> +
> + /*
> + * Ideally we should hold mtrr_mutex here to avoid mtrr entries
> + * being changed, but this routine is called at cpu boot time and
> + * holding the lock breaks it. This routine is called in two cases:
> + * 1. very early in software resume, when there absolutely are no
> + * mtrr entry changes; 2. cpu hotadd time. We let mtrr_add/del_page
> + * hold the cpuhotplug lock to prevent mtrr entry changes
> + */

That's a tantalising little comment. What does "breaks it" mean? How
can reviewers and later code-readers possibly suggest alternative fixes
to this breakage if they aren't told what it is?

> + set_mtrr(~0U, 0, 0, 0);
> + mtrr_aps_delayed_init = 0;
> +}
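For reference, reg == ~0U is the "all registers" value here: the per-CPU
handler dispatches on it, roughly like this in the mtrr code of this era
(a sketch, not a quote):

	if (data->smp_reg != ~0U)		/* program one register */
		mtrr_if->set(data->smp_reg, data->smp_base,
			     data->smp_size, data->smp_type);
	else					/* ~0U means rewrite them */
		mtrr_if->set_all();		/* all, PAT included */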
> +
> +void mtrr_bp_restore(void)
> +{
> + if (!use_intel())
> + return;
> +
> + mtrr_if->set_all();
> +}
> +
>
> ...
>
> --- tip.orig/kernel/cpu.c
> +++ tip/kernel/cpu.c
> @@ -413,6 +413,14 @@ int disable_nonboot_cpus(void)
> return error;
> }
>
> +void __attribute__((weak)) arch_enable_nonboot_cpus_begin(void)
> +{
> +}
> +
> +void __attribute__((weak)) arch_enable_nonboot_cpus_end(void)
> +{
> +}

Please use __weak.
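
That is, the kernel's shorthand from <linux/compiler.h>, which expands
to __attribute__((weak)):

	void __weak arch_enable_nonboot_cpus_begin(void)
	{
	}

	void __weak arch_enable_nonboot_cpus_end(void)
	{
	}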



