Subject: Re: [patch] cpufreq: mark cpufreq_tsc() as core_initcall_sync
Date: 2006-11-21

    On Tue, 21 Nov 2006, Paul E. McKenney wrote:

    > On Tue, Nov 21, 2006 at 07:44:20PM +0300, Oleg Nesterov wrote:
    > > On 11/20, Paul E. McKenney wrote:
    > > >
    > > > On Mon, Nov 20, 2006 at 09:57:12PM +0300, Oleg Nesterov wrote:
    > > > > >
    > > > > So, if we have global A == B == 0,
    > > > >
    > > > > CPU_0          CPU_1
    > > > >
    > > > > A = 1;         B = 2;
    > > > > mb();          mb();
    > > > > b = B;         a = A;
    > > > >
    > > > > It could happen that a == b == 0, yes? Doesn't this contradict the
    > > > > definition of mb()?
    > > >
    > > > It can and does happen. -Which- definition of mb()? ;-)
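
    For concreteness, the pattern above written out as kernel-style C -- this is
    only a sketch, the function names are made up, and it assumes the two
    functions run concurrently on different CPUs:

        static int A, B;        /* both initially 0 */
        static int a, b;

        void cpu0_side(void)    /* runs on CPU_0 */
        {
                A = 1;
                smp_mb();       /* full barrier between the store and the load */
                b = B;
        }

        void cpu1_side(void)    /* runs on CPU_1 */
        {
                B = 2;
                smp_mb();
                a = A;
        }

    The disputed outcome is whether, after both functions have run, the result
    a == 0 && b == 0 is possible -- that is, whether each CPU's load may miss
    the other CPU's store despite the barriers.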
    > >
    > > I had a somewhat similar understanding before this discussion
    > >
    > > [PATCH] Fix RCU race in access of nohz_cpu_mask
    > > http://marc.theaimsgroup.com/?t=113378060600003
    > >
    > > Semantics of smp_mb() [was : Re: [PATCH] Fix RCU race in access of nohz_cpu_mask ]
    > > http://marc.theaimsgroup.com/?t=113432312600001
    > >
    > > Could you please explain to me again why that fix was correct? What we have now is:
    > >
    > > CPU_0                               CPU_1
    > > rcu_start_batch:                    stop_hz_timer:
    > >
    > > rcp->cur++;                STORE    nohz_cpu_mask |= cpu
    > >
    > > smp_mb();                           mb(); // missed actually
    > >
    > > ->cpumask = ~nohz_cpu_mask; LOAD    if (rcu_pending()) // reads rcp->cur
    > >                                         nohz_cpu_mask &= ~cpu
    > >
    > > So, it is possible that CPU_0 reads an empty nohz_cpu_mask and starts a grace
    > > period with CPU_1 included in rcp->cpumask. CPU_1 in turn reads an old value
    > > of rcp->cur (so rcu_pending() returns 0) and becomes CPU_IDLE.
    >
    > At this point, I am not certain that it is in fact correct. :-/
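
    To make the two columns above concrete, the paths in question look roughly
    like the sketch below.  This is a simplified paraphrase of the 2.6-era RCU
    and dynticks code, reconstructed from the discussion rather than copied from
    the actual source:

        /* CPU_0: starting a grace period */
        static void rcu_start_batch(struct rcu_ctrlblk *rcp)
        {
                rcp->cur++;                     /* STORE: advance the current batch */
                smp_mb();
                /* LOAD: CPUs already in nohz_cpu_mask are left out of the batch */
                cpus_andnot(rcp->cpumask, cpu_online_map, nohz_cpu_mask);
        }

        /* CPU_1: about to stop its tick and go idle */
        static void stop_hz_timer(void)
        {
                int cpu = smp_processor_id();

                cpu_set(cpu, nohz_cpu_mask);    /* STORE: advertise we are going idle */
                smp_mb();                       /* the barrier said to be missed above */
                if (rcu_pending(cpu)) {         /* LOAD: reads rcp->cur, among others */
                        cpu_clear(cpu, nohz_cpu_mask);
                        return;                 /* RCU still needs this CPU; keep the tick */
                }
                /* stop the tick; RCU is assumed not to be waiting on this CPU */
        }

    The worry is that both loads miss the other CPU's store: CPU_0 then waits for
    a quiescent state from CPU_1, which has just decided it may go idle.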
    >
    > > Take another patch,
    > >
    > > Re: Oops on 2.6.18
    > > http://marc.theaimsgroup.com/?l=linux-kernel&m=116266392016286
    > >
    > > switch_uid:                         __sigqueue_alloc:
    > >
    > > STORE 'new_user' to ->user          STORE "locked" to ->siglock
    > >
    > > mb();                               "mb()"; // sort of, wrt loads/stores above
    > >
    > > LOAD ->siglock                      LOAD ->user
    > >
    > > Again, it is possible that switch_uid() doesn't notice that ->siglock is locked
    > > and frees ->user. __sigqueue_alloc() in turn reads an old (freed) value of ->user
    > > and does get_uid() on it.
    >
    > Ditto.
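
    The same shape again, written out as approximate C.  This paraphrases the fix
    from the "Oops on 2.6.18" thread linked above from memory, so treat the
    details as illustrative rather than exact:

        /* switch_uid() side: publish the new ->user, then make sure no CPU is
         * still inside ->siglock before freeing the old one */
        void switch_uid(struct user_struct *new_user)
        {
                struct user_struct *old_user = current->user;

                current->user = new_user;                      /* STORE ->user     */
                smp_mb();                                      /* the full barrier */
                spin_unlock_wait(&current->sighand->siglock);  /* LOAD ->siglock   */
                free_uid(old_user);
        }

        /* __sigqueue_alloc() side: runs with ->siglock held, so the "STORE" is
         * the lock acquisition itself and the quoted "mb()" is only whatever
         * ordering that acquisition happens to provide */
        struct sigqueue *__sigqueue_alloc(struct task_struct *t, gfp_t flags)
        {
                struct user_struct *user = t->user;   /* LOAD ->user              */

                get_uid(user);                        /* use-after-free if stale  */
                /* actual allocation omitted from this sketch */
                return NULL;
        }

    If switch_uid() can miss the locked ->siglock while __sigqueue_alloc() reads
    the stale ->user, the get_uid() lands on freed memory, which is exactly the
    scenario described above.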

    > > Paul, Alan, in case it was not clear: I am not arguing, just trying to
    > > understand, and I appreciate very much your time and your explanations.
    >
    > Either way, we clearly need better definitions of what the memory barriers
    > actually do! And I expect that we will need your help.

    Things may not be quite as bad as they appear. On many architectures the
    store-mb-load pattern will work as expected. (In fact, I don't know which
    architectures it might fail on.)
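
    One way to poke at a given machine is to run the pattern from userspace.  The
    harness below is only a rough stand-in: it uses C11 seq_cst fences and pthreads
    in place of the kernel's mb(), all of the names are invented, and a clean run
    proves nothing -- it just exercises the store-mb-load shape and counts how
    often both loads see zero:

        #include <pthread.h>
        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_int A, B;            /* the two shared variables   */
        static int a, b;                   /* results of one iteration   */
        static pthread_barrier_t go;       /* lets the two threads race  */

        static void *cpu0(void *unused)
        {
                pthread_barrier_wait(&go);
                atomic_store_explicit(&A, 1, memory_order_relaxed);  /* A = 1; */
                atomic_thread_fence(memory_order_seq_cst);           /* mb();  */
                b = atomic_load_explicit(&B, memory_order_relaxed);  /* b = B; */
                return NULL;
        }

        static void *cpu1(void *unused)
        {
                pthread_barrier_wait(&go);
                atomic_store_explicit(&B, 2, memory_order_relaxed);  /* B = 2; */
                atomic_thread_fence(memory_order_seq_cst);           /* mb();  */
                a = atomic_load_explicit(&A, memory_order_relaxed);  /* a = A; */
                return NULL;
        }

        int main(void)
        {
                int both_zero = 0;

                for (int i = 0; i < 100000; i++) {
                        pthread_t t0, t1;

                        atomic_store(&A, 0);
                        atomic_store(&B, 0);
                        pthread_barrier_init(&go, NULL, 2);
                        pthread_create(&t0, NULL, cpu0, NULL);
                        pthread_create(&t1, NULL, cpu1, NULL);
                        pthread_join(t0, NULL);
                        pthread_join(t1, NULL);
                        pthread_barrier_destroy(&go);
                        if (a == 0 && b == 0)
                                both_zero++;
                }
                printf("a == b == 0 seen %d times\n", both_zero);
                return 0;
        }

    (Under the C11 model the seq_cst fences forbid the 0/0 result, so a nonzero
    count would point at a toolchain or hardware problem; whether the kernel's
    mb() gives the same guarantee on every architecture is the question in this
    thread.)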

    Furthermore, this is a very difficult race to trigger. You couldn't force
    it to happen, for example, by adding a delay somewhere.

    Alan

