    Date: Mon, 1 Oct 2007
    From: Paul E. McKenney
    Subject: Re: [PATCH RFC 3/9] RCU: Preemptible RCU
    On Mon, Oct 01, 2007 at 11:44:25AM -0700, Davide Libenzi wrote:
    > On Sun, 30 Sep 2007, Paul E. McKenney wrote:
    >
    > > On Sun, Sep 30, 2007 at 04:02:09PM -0700, Davide Libenzi wrote:
    > > > On Sun, 30 Sep 2007, Oleg Nesterov wrote:
    > > >
    > > > > Ah, but I asked a different question. We must see CPU 1's stores by
    > > > > definition, but what about CPU 0's stores (which could be seen by CPU 1)?
    > > > >
    > > > > Let's take a "real life" example,
    > > > >
    > > > > A = B = X = 0;
    > > > > P = Q = &A;
    > > > >
    > > > > CPU_0           CPU_1           CPU_2
    > > > >
    > > > > P = &B;         *P = 1;         if (X) {
    > > > >                 wmb();              rmb();
    > > > >                 X = 1;              BUG_ON(*P != 1 && *Q != 1);
    > > > >                                 }
    > > > >
    > > > > So, is it possible that CPU_1 sees P == &B, but CPU_2 sees P == &A ?
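    Oleg's example can also be written out as a user-space litmus test with
    C11 atomics and pthreads. The sketch below is only an approximation: the
    fences stand in for wmb()/rmb() but do not have identical semantics, the
    cpu0()/cpu1()/cpu2() names are invented for the sketch, and (if I am
    reading the C11 model right) the failing outcome is permitted here too,
    since nothing orders CPU 0's store with respect to CPU 1's.

    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>

    static atomic_int A, B, X;
    static _Atomic(atomic_int *) P = &A, Q = &A;

    static void *cpu0(void *arg)
    {
    	atomic_store_explicit(&P, &B, memory_order_relaxed);	/* P = &B; */
    	return arg;
    }

    static void *cpu1(void *arg)
    {
    	atomic_int *p = atomic_load_explicit(&P, memory_order_relaxed);

    	atomic_store_explicit(p, 1, memory_order_relaxed);	/* *P = 1; */
    	atomic_thread_fence(memory_order_release);		/* wmb();  */
    	atomic_store_explicit(&X, 1, memory_order_relaxed);	/* X = 1;  */
    	return arg;
    }

    static void *cpu2(void *arg)
    {
    	if (atomic_load_explicit(&X, memory_order_relaxed)) {	/* if (X)  */
    		atomic_thread_fence(memory_order_acquire);	/* rmb();  */
    		atomic_int *p = atomic_load_explicit(&P, memory_order_relaxed);
    		atomic_int *q = atomic_load_explicit(&Q, memory_order_relaxed);

    		/* BUG_ON(*P != 1 && *Q != 1) */
    		assert(!(atomic_load(p) != 1 && atomic_load(q) != 1));
    	}
    	return arg;
    }

    int main(void)
    {
    	pthread_t t[3];

    	pthread_create(&t[0], NULL, cpu0, NULL);
    	pthread_create(&t[1], NULL, cpu1, NULL);
    	pthread_create(&t[2], NULL, cpu2, NULL);
    	for (int i = 0; i < 3; i++)
    		pthread_join(t[i], NULL);
    	return 0;
    }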
    > > >
    > > > That can't be. CPU_2 sees X=1, which happened after (or, from a cache
    > > > invalidation POV, at the same time as) *P=1, which in turn must have
    > > > happened after P=&B (in order for *P to assign to B). So P=&B happened,
    > > > from a pure time POV, before the rmb(), and the rmb() should guarantee
    > > > that CPU_2 sees P=&B too.
    > >
    > > Actually, CPU designers have to go quite a ways out of their way to
    > > prevent this BUG_ON from happening. One way that it would happen
    > > naturally would be if the cache line containing P were owned by CPU 2,
    > > and if CPUs 0 and 1 shared a store buffer that they both snooped. So,
    > > here is what could happen given careless or sadistic CPU designers:
    >
    > Ohh, I misinterpreted that rmb(), sorry. Somehow I took it for granted
    > that it was a cross-CPU sync point (a la read_barrier_depends). If it only
    > orders the local CPU's loads, things are clearly different. But ...
    >
    > > o   CPU 0 stores &B to P, but misses the cache, so puts the
    > >     result in the store buffer.  This means that only CPUs 0 and 1
    > >     can see it.
    > >
    > > o   CPU 1 fetches P, and sees &B, so stores a 1 to B.  Again,
    > >     this value for P is visible only to CPUs 0 and 1.
    > >
    > > o   CPU 1 executes a wmb(), which forces CPU 1's stores to happen
    > >     in order.  But it does nothing about CPU 0's stores, nor about
    > >     CPU 1's loads, for that matter (and the only reason that POWER
    > >     ends up working the way you would like is because wmb() turns
    > >     into "sync" rather than the "eieio" instruction that would have
    > >     been used for smp_wmb() -- which is maybe what Oleg was thinking
    > >     of, but happened to abbreviate.  If my analysis is buggy, Anton
    > >     and Paulus will no doubt correct me...)
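    The bullet list above is concrete enough to act out in ordinary code. The
    single-threaded sketch below does not model any real CPU; it simply replays
    that sequence of events with the shared store buffer made explicit, to show
    CPU 2 reaching BUG_ON(*P != 1 && *Q != 1) with both pointers still aimed at
    the old A.

    #include <stdbool.h>
    #include <stdio.h>

    static int A, B, X;		/* main memory, visible to every CPU   */
    static int *P = &A, *Q = &A;	/* main-memory copies of the pointers  */

    /* Shared store buffer, snooped only by CPUs 0 and 1. */
    static int *buffered_P;
    static bool buffered_P_valid;

    /* CPUs 0 and 1 see the buffered store to P; CPU 2 sees only memory. */
    static int *load_P_cpu01(void) { return buffered_P_valid ? buffered_P : P; }
    static int *load_P_cpu2(void)  { return P; }

    int main(void)
    {
    	/* CPU 0: P = &B; -- misses the cache, lands in the store buffer. */
    	buffered_P = &B;
    	buffered_P_valid = true;

    	/* CPU 1: *P = 1; -- snoops the buffer, sees &B, so writes B. */
    	*load_P_cpu01() = 1;

    	/*
    	 * CPU 1: wmb(); -- orders CPU 1's own stores (B before X), but
    	 * does nothing to push CPU 0's buffered store of P out to memory.
    	 */

    	/* CPU 1: X = 1; -- this store does reach main memory. */
    	X = 1;

    	/* CPU 2: if (X) { rmb(); BUG_ON(*P != 1 && *Q != 1); } */
    	if (X) {
    		/* rmb() orders CPU 2's loads; it cannot make the
    		 * still-buffered store to P visible. */
    		int *p = load_P_cpu2();		/* still &A */
    		int *q = Q;			/* still &A */

    		if (*p != 1 && *q != 1)
    			printf("BUG_ON would fire: *P == %d, *Q == %d\n",
    			       *p, *q);
    	}
    	return 0;
    }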
    >
    > If a store buffer is shared between CPU_0 and CPU_1, it is very likely
    > that a sync done on CPU_1 is going to sync even CPU_0 stores that are
    > held in the buffer at the time of CPU_1's sync.

    That would indeed be one approach that CPU designers could take to
    avoid being careless or sadistic. ;-)

    Another approach would be to make CPU 1 refrain from snooping CPU 0's
    entries in the shared store queue.

    Thanx, Paul
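    For contrast, the cross-CPU guarantee Davide had in mind can be expressed
    in today's terms with C11 acquire/release ordering, where happens-before
    is transitive: if CPU 1 observes CPU 0's store and CPU 2 observes CPU 1's,
    CPU 2 must also observe CPU 0's. This is the C11 model rather than the
    2007 kernel barrier semantics, and the code is only a sketch using the
    same invented thread names as the earlier one.

    #include <assert.h>
    #include <pthread.h>
    #include <stdatomic.h>

    static atomic_int A, B, X;
    static _Atomic(atomic_int *) P = &A, Q = &A;

    static void *cpu0(void *arg)
    {
    	/* Release store: pairs with CPU 1's acquire load of P. */
    	atomic_store_explicit(&P, &B, memory_order_release);
    	return arg;
    }

    static void *cpu1(void *arg)
    {
    	atomic_int *p = atomic_load_explicit(&P, memory_order_acquire);

    	atomic_store_explicit(p, 1, memory_order_relaxed);
    	/* Release store of X: publishes *p = 1 and, transitively,
    	 * anything CPU 1 had already observed (such as P = &B). */
    	atomic_store_explicit(&X, 1, memory_order_release);
    	return arg;
    }

    static void *cpu2(void *arg)
    {
    	if (atomic_load_explicit(&X, memory_order_acquire)) {
    		atomic_int *p = atomic_load_explicit(&P, memory_order_relaxed);
    		atomic_int *q = atomic_load_explicit(&Q, memory_order_relaxed);

    		/* BUG_ON(*P != 1 && *Q != 1): the acquire/release chain
    		 * keeps this from firing under the C11 model. */
    		assert(atomic_load(p) == 1 || atomic_load(q) == 1);
    	}
    	return arg;
    }

    int main(void)
    {
    	pthread_t t[3];

    	pthread_create(&t[0], NULL, cpu0, NULL);
    	pthread_create(&t[1], NULL, cpu1, NULL);
    	pthread_create(&t[2], NULL, cpu2, NULL);
    	for (int i = 0; i < 3; i++)
    		pthread_join(t[i], NULL);
    	return 0;
    }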
