Date: 2011-03-16
From: Jan Beulich <JBeulich@novell.com>
Subject: Re: [PATCH 3/4 v3] smp_call_function_many: handle concurrent clearing of mask

>>> On 15.03.11 at 20:27, Milton Miller <miltonm@bga.com> wrote:
> Mike Galbraith reported finding a lockup ("perma-spin bug") where the
> cpumask passed to smp_call_function_many was cleared by other cpu(s)
> while a cpu was preparing its call_data block, leaving no cpu to
> clear the last ref and unlock the block.
>
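For illustration, the problematic caller-side pattern looks roughly like
this (a hedged sketch, not kernel source; pending_mask, do_remote_work
and kick_pending_cpus are hypothetical names):

	/* One cpu builds a mask of cpus with pending work and IPIs them,
	 * while the targeted cpus may notice the work on their own and
	 * clear their bit from the shared mask at any time. */
	static struct cpumask pending_mask;	/* shared with remote cpus */

	static void do_remote_work(void *info)	/* hypothetical handler */
	{
		/* ... per-cpu work ... */
	}

	static void kick_pending_cpus(void)
	{
		/*
		 * Remote cpus may run cpumask_clear_cpu(cpu, &pending_mask)
		 * while we are inside smp_call_function_many().  Before this
		 * patch, if the mask went empty between the fastpath scan and
		 * the cpumask_and() into data->cpumask, refs was set to 0, no
		 * cpu ever dropped the last reference, and the call_data
		 * block stayed locked forever: the "perma-spin" lockup.
		 */
		smp_call_function_many(&pending_mask, do_remote_work,
				       NULL, false);
	}
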
> Having cpus clear their bit asynchronously could be useful on a mask of
> cpus that might have a translation context, or cpus that need a push to
> complete an rcu window.
>
> Instead of adding a BUG_ON and requiring yet another cpumask copy, just
> detect the race and handle it.
>
> Note: arch_send_call_function_ipi_mask must still handle an empty
> cpumask because the data block is globally visible before that
> arch callback is made. And (obviously) there are no guarantees as
> to which cpus are notified if the mask is changed during the call;
> only cpus that were online and had their mask bit set during the
> whole call are guaranteed to be called.
>
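As a sketch of what that means for an arch hook (illustrative only, not
any real port's code; arch_trigger_ipi and CALL_FUNCTION_VECTOR are
hypothetical names):

	/*
	 * data->cpumask is globally visible before this hook is called,
	 * so concurrent clearing can leave the mask empty by the time we
	 * get here: an empty mask must be a no-op, not a BUG_ON().
	 */
	void arch_send_call_function_ipi_mask(const struct cpumask *mask)
	{
		int cpu;

		for_each_cpu(cpu, mask)	/* iterates zero times if empty */
			arch_trigger_ipi(cpu, CALL_FUNCTION_VECTOR);
	}
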
> Reported-by: Mike Galbraith <efault@gmx.de>
> Reported-by: Jan Beulich <JBeulich@novell.com>
> Signed-off-by: Milton Miller <miltonm@bga.com>

Acked-by: Jan Beulich <jbeulich@novell.com>

> ---
> v3: try to clarify which mask in comment
> v2: rediff for v2 of call_function_many: fix list delete vs add race
>
> The arch code not expecting the race to empty the mask is the cause
> of https://bugzilla.kernel.org/show_bug.cgi?id=23042 that Andrew pointed
> out.
>
>
> Index: common/kernel/smp.c
> ===================================================================
> --- common.orig/kernel/smp.c 2011-03-15 05:22:26.000000000 -0500
> +++ common/kernel/smp.c 2011-03-15 06:22:26.000000000 -0500
> @@ -450,7 +450,7 @@ void smp_call_function_many(const struct
> {
> struct call_function_data *data;
> unsigned long flags;
> - int cpu, next_cpu, this_cpu = smp_processor_id();
> + int refs, cpu, next_cpu, this_cpu = smp_processor_id();
>
> /*
> * Can deadlock when called with interrupts disabled.
> @@ -461,7 +461,7 @@ void smp_call_function_many(const struct
> WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
> && !oops_in_progress && !early_boot_irqs_disabled);
>
> - /* So, what's a CPU they want? Ignoring this one. */
> + /* Try to fastpath. So, what's a CPU they want? Ignoring this one. */
> cpu = cpumask_first_and(mask, cpu_online_mask);
> if (cpu == this_cpu)
> cpu = cpumask_next_and(cpu, mask, cpu_online_mask);
> @@ -519,6 +519,13 @@ void smp_call_function_many(const struct
> /* We rely on the "and" being processed before the store */
> cpumask_and(data->cpumask, mask, cpu_online_mask);
> cpumask_clear_cpu(this_cpu, data->cpumask);
> + refs = cpumask_weight(data->cpumask);
> +
> + /* Some callers race with other cpus changing the passed mask */
> + if (unlikely(!refs)) {
> + csd_unlock(&data->csd);
> + return;
> + }
>
> raw_spin_lock_irqsave(&call_function.lock, flags);
> /*
> @@ -532,7 +539,7 @@ void smp_call_function_many(const struct
> * to the cpumask before this write to refs, which indicates
> * data is on the list and is ready to be processed.
> */
> - atomic_set(&data->refs, cpumask_weight(data->cpumask));
> + atomic_set(&data->refs, refs);
> raw_spin_unlock_irqrestore(&call_function.lock, flags);
>
> /*
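
To see why an empty mask must be caught before the block is published,
here is a self-contained userspace analogue of the refcount handshake
(plain C11 atomics; an analogy under stated assumptions, not the kernel
code itself):

	#include <stdatomic.h>
	#include <stdbool.h>

	struct call_data {
		atomic_int  refs;	/* one reference per targeted cpu */
		atomic_bool locked;	/* csd lock analogue */
	};

	/* Publisher side, analogous to smp_call_function_many(). */
	static bool publish(struct call_data *d, int targets)
	{
		/*
		 * With the patch: bail out before publishing if the mask
		 * raced to empty.  Otherwise refs would start at 0 and no
		 * consumer would ever reach the unlock in finish_one().
		 */
		if (targets == 0) {
			atomic_store(&d->locked, false);	/* csd_unlock() */
			return false;
		}
		atomic_store(&d->refs, targets);
		return true;
	}

	/* Consumer side, analogous to a cpu completing its call. */
	static void finish_one(struct call_data *d)
	{
		if (atomic_fetch_sub(&d->refs, 1) == 1)
			atomic_store(&d->locked, false);	/* last ref unlocks */
	}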




