Subject: Re: call_function_many: fix list delete vs add race
On Fri, 2011-01-28 at 18:20 -0600, Milton Miller wrote:
> Peter pointed out there was nothing preventing the list_del_rcu in
> smp_call_function_interrupt from running before the list_add_rcu in
> smp_call_function_many. Fix this by not setting refs until we have put
> the entry on the list. We can use the lock acquire and release instead
> of a wmb.
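
That is, the publish order becomes something like the below (a sketch of
the shape the changelog describes, not the exact patch; names as in
2.6.38's kernel/smp.c):

	raw_spin_lock_irqsave(&call_function.lock, flags);
	/* Make the entry visible on the queue first... */
	list_add_rcu(&data->csd.list, &call_function.queue);
	/*
	 * ...and only then let refs go nonzero, so an interrupt handler
	 * that sees refs != 0 cannot list_del_rcu() an entry that was
	 * never added.  Per the changelog above, the lock acquire and
	 * release stand in for an explicit smp_wmb() here.
	 */
	atomic_set(&data->refs, cpumask_weight(data->cpumask));
	raw_spin_unlock_irqrestore(&call_function.lock, flags);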

Wondering if a final sanity check makes sense. I've got a perma-spin
bug where what the comment below describes apparently happened: another
CPU diddling the mask mid-flight may make this CPU do horrible things
to itself as it's setting up to IPI others with that mask.
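
Something like this hypothetical caller (names invented for
illustration, not code from this thread):

	/* Mask shared with other CPUs, updated concurrently. */
	static struct cpumask pending_mask;

	static void do_nothing_ipi(void *info)
	{
	}

	static void kick_pending_cpus(void)
	{
		/*
		 * smp_call_function_many() copies this mask with
		 * cpumask_and() and then clears the local CPU's bit.
		 * If another CPU diddles pending_mask meanwhile, the
		 * snapshot can come out empty (or grow this CPU back),
		 * and the sender can spin forever in csd_lock_wait()
		 * for acks that will never arrive.
		 */
		smp_call_function_many(&pending_mask, do_nothing_ipi,
				       NULL, true);
	}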

---
kernel/smp.c | 3 +++
1 file changed, 3 insertions(+)

Index: linux-2.6.38.git/kernel/smp.c
===================================================================
--- linux-2.6.38.git.orig/kernel/smp.c
+++ linux-2.6.38.git/kernel/smp.c
@@ -490,6 +490,9 @@ void smp_call_function_many(const struct
 	cpumask_and(data->cpumask, mask, cpu_online_mask);
 	cpumask_clear_cpu(this_cpu, data->cpumask);
 
+	/* Did you pass me a mask that can be changed/emptied under me? */
+	BUG_ON(cpumask_empty(data->cpumask));
+
 	/*
 	 * We reuse the call function data without waiting for any grace
 	 * period after some other cpu removes it from the global queue.