Subject: [PATCH] [68/275] kernel/smp.c: fix smp_call_function_many() SMP race
2.6.35-longterm review patch.  If anyone has any objections, please let me know.

From: Anton Blanchard <>

commit 6dc19899958e420a931274b94019e267e2396d3e upstream.

I noticed a failure where we hit the following WARN_ON in
generic_smp_call_function_interrupt:

		if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
			continue;

		data->csd.func(data->csd.info);

		refs = atomic_dec_return(&data->refs);
		WARN_ON(refs < 0);	<-------------------------

We atomically tested and cleared our bit in the cpumask, and yet the
number of cpus left (ie refs) was 0. How can this be?

It turns out commit 54fdade1c3332391948ec43530c02c4794a38172
("generic-ipi: make struct call_function_data lockless") is at fault. It
removes locking from smp_call_function_many and in doing so creates a
rather complicated race.

The problem comes about because:

 - The smp_call_function_many interrupt handler walks call_function.queue
   without any locking.

 - We reuse a percpu data structure in smp_call_function_many.

 - We do not wait for any RCU grace period before starting the next
   smp_call_function_many.
Imagine a scenario where CPU A does two smp_call_functions back to back,
and CPU B does an smp_call_function in between. We concentrate on how CPU
C handles the calls:


CPU A                     CPU B                     CPU C

smp_call_function

                                                    smp_call_function_interrupt
                                                        walks call_function.queue,
                                                    sees data from CPU A on list

                          smp_call_function

                                                    smp_call_function_interrupt
                                                        walks call_function.queue,
                                                    sees (stale) CPU A on list
smp_call_function int
clears last ref on A
list_del_rcu, unlock

smp_call_function reuses
percpu *data A
                                                    data->cpumask sees and
                                                    clears bit in cpumask
                                                    might be using old or new fn!
                                                    decrements refs below 0

set data->refs (too late!)

The important thing to note is that, since the interrupt handler walks a
potentially stale call_function.queue without any locking, another cpu
can view the percpu *data structure at any time, even while the owner is
in the process of initialising it.
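
For reference, the percpu structure being raced on is small. Condensed
from kernel/smp.c of this vintage as a sketch, not a verbatim quote (the
csd.list member is what gets chained onto call_function.queue):

	struct call_function_data {
		struct call_single_data csd;     /* func, info, list */
		atomic_t                refs;    /* cpus yet to run func */
		cpumask_var_t           cpumask; /* cpus that must run func */
	};

	/* one slot per cpu, reused by every smp_call_function_many() */
	static DEFINE_PER_CPU_SHARED_ALIGNED(struct call_function_data,
					     cfd_data);

Because each sender always reuses its own cfd_data slot, a reader holding
a stale list pointer can observe that slot mid-reinitialisation.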

The following test case hits the WARN_ON 100% of the time on my PowerPC
box (having 128 threads does help :)

#include <linux/module.h>
#include <linux/init.h>
#include <linux/smp.h>
#include <linux/workqueue.h>

#define ITERATIONS 100

static void do_nothing_ipi(void *dummy)
{
}

static void do_ipis(struct work_struct *dummy)
{
	int i;

	for (i = 0; i < ITERATIONS; i++)
		smp_call_function(do_nothing_ipi, NULL, 1);

	printk(KERN_DEBUG "cpu %d finished\n", smp_processor_id());
}

static struct work_struct work[NR_CPUS];

static int __init testcase_init(void)
{
	int cpu;

	for_each_online_cpu(cpu) {
		INIT_WORK(&work[cpu], do_ipis);
		schedule_work_on(cpu, &work[cpu]);
	}

	return 0;
}

static void __exit testcase_exit(void)
{
}

module_init(testcase_init);
module_exit(testcase_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Anton Blanchard");
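
(To reproduce: build the above as a standard out-of-tree module with a
one-line obj-m Makefile and insmod it; every online cpu then hammers
smp_call_function() from its workqueue, and on a sufficiently parallel
box the WARN_ON shows up in the kernel log.)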

I tried to fix it by ordering the read and the write of ->cpumask and
->refs. In doing so I missed a critical case, but Paul McKenney was able
to spot my bug thankfully :) To ensure we aren't viewing previous
iterations, the interrupt handler needs to read ->refs then ->cpumask
then ->refs _again_.
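
Stripped to just the ordering (and with the simplification noted below,
which drops that leading read of ->refs), the pairing the patch
establishes is:

	/* sender, smp_call_function_many() */
	cpumask_and(data->cpumask, mask, cpu_online_mask);
	cpumask_clear_cpu(this_cpu, data->cpumask);
	smp_wmb();		/* cpumask stores before refs store */
	atomic_set(&data->refs, cpumask_weight(data->cpumask));

	/* receiver, generic_smp_call_function_interrupt() */
	if (!cpumask_test_cpu(cpu, data->cpumask))
		continue;	/* not ours, or a reused entry */
	smp_rmb();		/* pairs with the smp_wmb() above */
	if (atomic_read(&data->refs) == 0)
		continue;	/* reused entry: refs not set yet */

The cpu must be seen in the cpumask before refs is checked, and both
must be set before the callback runs, so a reused entry whose ->refs has
not yet been (re)set is skipped rather than executed.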

Thanks to Milton Miller and Paul McKenney for helping to debug this issue.

[ add WARN_ON and BUG_ON, remove extra read of refs before initial read
  of mask that doesn't help (also noted by Peter Zijlstra), adjust
  comments, hopefully clarify scenario ]
[ remove excess tests ]
Signed-off-by: Anton Blanchard <>
Signed-off-by: Milton Miller <>
Signed-off-by: Andi Kleen <>
Cc: Ingo Molnar <>
Cc: "Paul E. McKenney" <>
Signed-off-by: Andrew Morton <>
Signed-off-by: Linus Torvalds <>
Signed-off-by: Greg Kroah-Hartman <>

kernel/smp.c | 28 ++++++++++++++++++++++++++++
1 file changed, 28 insertions(+)

Index: linux-2.6.35.y/kernel/smp.c
===================================================================
--- linux-2.6.35.y.orig/kernel/smp.c	2011-03-29 22:51:42.804627906 -0700
+++ linux-2.6.35.y/kernel/smp.c	2011-03-29 23:54:08.137794201 -0700
@@ -194,6 +194,24 @@
 	list_for_each_entry_rcu(data, &call_function.queue, csd.list) {
 		int refs;
 
+		/*
+		 * Since we walk the list without any locks, we might
+		 * see an entry that was completed, removed from the
+		 * list and is in the process of being reused.
+		 *
+		 * We must check that the cpu is in the cpumask before
+		 * checking the refs, and both must be set before
+		 * executing the callback on this cpu.
+		 */
+
+		if (!cpumask_test_cpu(cpu, data->cpumask))
+			continue;
+
+		smp_rmb();
+
+		if (atomic_read(&data->refs) == 0)
+			continue;
+
 		if (!cpumask_test_and_clear_cpu(cpu, data->cpumask))
 			continue;
@@ -442,11 +460,21 @@
 
 	data = &__get_cpu_var(cfd_data);
 	csd_lock(&data->csd);
+	BUG_ON(atomic_read(&data->refs) || !cpumask_empty(data->cpumask));
 
 	data->csd.func = func;
 	data->csd.info = info;
 	cpumask_and(data->cpumask, mask, cpu_online_mask);
 	cpumask_clear_cpu(this_cpu, data->cpumask);
+
+	/*
+	 * To ensure the interrupt handler gets a complete view
+	 * we order the cpumask and refs writes and order the read
+	 * of them in the interrupt handler. In addition we may
+	 * only clear our own cpu bit from the mask.
+	 */
+	smp_wmb();
+
 	atomic_set(&data->refs, cpumask_weight(data->cpumask));
 
 	raw_spin_lock_irqsave(&call_function.lock, flags);