    Subject: Re: [PATCH 1/3] x86, reboot: Use NMI instead of REBOOT_VECTOR to stop cpus
    On 2011/10/11 23:24, Don Zickus wrote:
    > A recent discussion started talking about the locking on the pstore fs
    > and how it relates to the kmsg infrastructure. We noticed it was possible
    > for userspace to r/w to the pstore fs (grabbing the locks in the process)
    > and block the panic path from r/w to the same fs.
    >
    > The reason was the cpu with the lock could be doing work while the crashing
    > cpu is panic'ing. Busting those spinlocks might cause those cpus to step
    > on each other's data. Fine, fair enough.
    >
    > It was suggested it would be nice to serialize the panic path (ie stop
    > the other cpus) and have only one cpu running. This would allow us to
    > bust the spinlocks and not worry about another cpu stepping on the data.
    >
    > Of course, smp_send_stop() does this in the panic case. kmsg_dump() would
    > have to be moved to be called after it. Easy enough.
    >
    > The only problem is that on x86 the smp_send_stop() function uses the
    > REBOOT_VECTOR. Any cpu with irqs disabled (which is what pstore and its
    > backend ERST would do) blocks this IPI and thus does not stop. This makes
    > it difficult to reliably log data to the pstore fs.
    >
    > The patch below switches from the REBOOT_VECTOR to NMI (and mimics what
    > kdump does). Switching to NMI allows us to deliver the IPI when irqs are
    > disabled, increasing the reliability of this function.
    >
    > However, Andi carefully noted that on some machines this approach does not
    > work because of broken BIOSes or whatever.
    >
    > To help accommodate this, the next couple of patches will run a selftest and
    > provide a knob to disable it.
    >
    > V2:
    > uses atomic ops to serialize the cpu that shuts everyone down
    >
    > Signed-off-by: Don Zickus <dzickus@redhat.com>
    > ---
    >
    > [note] this patch sits on top of another NMI infrastructure change I have
    > submitted, so the nmi registration might not apply cleanly without that patch.
    > ---
    > arch/x86/kernel/smp.c | 58 +++++++++++++++++++++++++++++++++++++++++++++++-
    > 1 files changed, 56 insertions(+), 2 deletions(-)
    >
    > diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
    > index 013e7eb..7bdbf6a 100644
    > --- a/arch/x86/kernel/smp.c
    > +++ b/arch/x86/kernel/smp.c
    > @@ -28,6 +28,7 @@
    > #include <asm/mmu_context.h>
    > #include <asm/proto.h>
    > #include <asm/apic.h>
    > +#include <asm/nmi.h>
    > /*
    > * Some notes on x86 processor bugs affecting SMP operation:
    > *
    > @@ -147,6 +148,59 @@ void native_send_call_func_ipi(const struct cpumask *mask)
    > free_cpumask_var(allbutself);
    > }
    >
    > +static atomic_t stopping_cpu = ATOMIC_INIT(-1);
    > +
    > +static int smp_stop_nmi_callback(unsigned int val, struct pt_regs *regs)
    > +{
    > + /* We are registered on stopping cpu too, avoid spurious NMI */
    > + if (raw_smp_processor_id() == atomic_read(&stopping_cpu))
    > + return NMI_HANDLED;
    > +
    > + stop_this_cpu(NULL);
    > +
    > + return NMI_HANDLED;
    > +}
    > +
    > +static void native_nmi_stop_other_cpus(int wait)
    > +{
    > + unsigned long flags;
    > + unsigned long timeout;
    > +
    > + if (reboot_force)
    > + return;
    > +
    > + /*
    > + * Use an own vector here because smp_call_function
    > + * does lots of things not suitable in a panic situation.
    > + */
    > + if (num_online_cpus() > 1) {
    > + /* did someone beat us here? */
    > + if (atomic_cmpxchg(&stopping_cpu, -1, safe_smp_processor_id()) != -1)
    > + return;
    > +
    > + if (register_nmi_handler(NMI_LOCAL, smp_stop_nmi_callback,
    > + NMI_FLAG_FIRST, "smp_stop"))
    > + return; /* return what? */
    > +
    > + /* sync above data before sending NMI */
    > + wmb();
    > +
    > + apic->send_IPI_allbutself(NMI_VECTOR);
    > +
    > + /*
    > + * Don't wait longer than a second if the caller
    > + * didn't ask us to wait.
    > + */
    > + timeout = USEC_PER_SEC;
    > + while (num_online_cpus() > 1 && (wait || timeout--))
    > + udelay(1);

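    For reference, the V2 serialization boils down to a "first CPU to claim
    stopping_cpu wins" compare-and-swap: one contender swaps -1 for its own id
    and goes on to send the NMI, everyone else backs off. Below is a minimal
    userspace sketch of that pattern only; the thread ids and the names
    stopper_id/try_to_stop are illustrative and not taken from the patch.

    /*
     * Illustrative userspace sketch (not the patch itself) of the
     * "first to claim the slot wins" idea used for stopping_cpu.
     * Build with: cc -pthread sketch.c
     */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <pthread.h>

    static atomic_int stopper_id = -1;      /* plays the role of stopping_cpu */

    static void *try_to_stop(void *arg)
    {
            int self = (int)(intptr_t)arg;  /* stands in for safe_smp_processor_id() */
            int expected = -1;

            /* Only the thread that swaps -1 -> self may "send the NMI". */
            if (atomic_compare_exchange_strong(&stopper_id, &expected, self))
                    printf("thread %d wins and would send the NMI IPI\n", self);
            else
                    printf("thread %d backs off, %d got there first\n", self, expected);

            return NULL;
    }

    int main(void)
    {
            pthread_t t[4];
            int i;

            for (i = 0; i < 4; i++)
                    pthread_create(&t[i], NULL, try_to_stop, (void *)(intptr_t)i);
            for (i = 0; i < 4; i++)
                    pthread_join(t[i], NULL);

            return 0;
    }

    The kernel version additionally has the NMI handler compare
    raw_smp_processor_id() against stopping_cpu, so the stopping cpu ignores
    the NMI it sent to itself (see smp_stop_nmi_callback() in the quoted diff).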
    In this patch and the next one, how about using the same logic as in commit 74d91e3c6?

