Subject: Re: [PATCH 1/3 v2] call_function_many: fix list delete vs add race
On Tue, 2011-02-01 at 14:00 -0800, Milton Miller wrote:
> On Tue, 1 Feb 2011 about 14:00:26 -0800, "Paul E. McKenney" wrote:

> > Starting with smp_call_function_many():
> >
> > o The check for refs is redundant:
> >
> > 	/* some callers might race with other cpus changing the mask */
> > 	if (unlikely(!refs)) {
> > 		csd_unlock(&data->csd);
> > 		return;
> > 	}
> >
> > The memory barriers and atomic functions in
> > generic_smp_call_function_interrupt() prevent the callback from
> > being reused before the cpumask bits have all been cleared, right?
>
> The issue is not the cpumask in the csd, but the mask passed in from the
> caller.  If other cpus clear the mask between the cpumask_first and
> cpumask_next above (where we established there were at least two cpus not
> ourself) and the cpumask_copy, then this can happen.  Both Mike Galbraith
> and Jan Beulich saw this in practice (Mike's case was mm_cpumask(mm)).
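
The window Milton describes can be modeled outside the kernel.  Below is a
minimal userspace sketch (C11 threads and atomics, not kernel code; the
names and the 8-bit "mask" are invented for illustration): the caller counts
bits in a shared mask to decide there are at least two targets, another
thread then clears the mask, and the later copy comes back empty -- which is
exactly the case the !refs check catches.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

/* "cpus" 1 and 2 start out set in the caller-visible mask. */
static _Atomic unsigned int caller_mask = 0x06;

/* Models other cpus clearing themselves from the caller's mask
 * (e.g. mm_cpumask(mm)) after the caller has already counted them. */
static void *clear_mask(void *arg)
{
	(void)arg;
	atomic_store(&caller_mask, 0);
	return NULL;
}

int main(void)
{
	pthread_t t;
	unsigned int snap, copy;

	/* cpumask_first()/cpumask_next() analogue: establish that there
	 * are at least two target cpus besides ourself. */
	snap = atomic_load(&caller_mask);
	if (__builtin_popcount(snap) < 2)
		return 0;

	/* Force the race window to hit before the copy. */
	pthread_create(&t, NULL, clear_mask, NULL);
	pthread_join(&t, NULL);

	/* cpumask_copy() analogue: the mask may now be empty. */
	copy = atomic_load(&caller_mask);
	if (!copy) {
		/* The !refs check: unlock the csd and bail instead of
		 * queueing an element nobody will ever take off the list. */
		puts("mask emptied under us, bailing");
		return 0;
	}
	printf("would send IPIs to mask %#x\n", copy);
	return 0;
}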

Mine (and Jan's) is a flavor of one that was hit and fixed via a copy in ia64.

http://git2.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=75c1c91cb92806f960fcd6e53d2a0c21f343081c
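
For the shape of a "fixed via copy" approach, a sketch under the assumption
that the fix takes a private snapshot of the live mask before handing it to
smp_call_function_many() -- the names below are illustrative, not the code
from the commit above:

#include <stdatomic.h>
#include <stdio.h>

static _Atomic unsigned int live_mask = 0x06;	/* e.g. mm_cpumask(mm) */

/* Stand-in for smp_call_function_many(): only ever sees the snapshot. */
static void call_function_many(unsigned int mask)
{
	if (mask)
		printf("IPI mask %#x\n", mask);
}

int main(void)
{
	/* Take a private copy first; concurrent clears of live_mask can
	 * no longer empty the mask we actually iterate over. */
	unsigned int snap = atomic_load(&live_mask);

	call_function_many(snap);
	return 0;
}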


-Mike


