Date:    27 Nov 1999
Subject: RE: spin_unlock optimization(i386)

> Maybe i'm wrong, but no. Let's see:
>
>       CPU1                            CPU2
> t0
> t1    'lock' read spin_lock
>       store spin_lock (updated)
>
> t2                                    'lock' read spin_lock
>                                       <loops>

It means nothing to lock a read. Locking just makes an operation atomic,
and reads (of aligned 32-bit quantities) are always atomic.

> 'lock' (executed by CPU1 at t1) must first block other CPUs from executing
> other 'lock' operations (on the same variable, at least), for the
> whole t1.

They couldn't anyway. Two concurrent reads of the same variable do no harm.
A read and a write would be prevented by the cache coherency logic. Only one
processor cache can have the right to write to a particular area of memory.

> The operation is atomic (serialized) system-wide.

Atomicity and serialization are completely different. The 'LOCK' prefix
makes an operation atomic. Memory barriers serialize.

> But at t2 CPU2 MUST see the updated value of spin_lock with no delay, or
> the mutual exclusion will fail.

Correct, so this code is wrong. It used 'LOCK' when it really needed
serialization. Atomicity is not the problem here.

> So cache coherency must be forced somehow.

Cache coherency is always forced by the hardware. The problem here wasn't
cache coherency or atomicity, it was serialization. This is where memory
barriers come in.

> If this is done by CPU1 flushing the cache or by CPU2 'snooping' it, i
> don't know. But it MUST happen somehow. And please note there's no
> "another store" on CPU2. Just a read. Normal cache coherency (allowing
> delayed store effects) does not apply here. You need something 'stronger'.

Correct, memory barriers.

> > Nope. Not true at all. The lock just makes the operation atomic,
> > preventing another processor from 'stealing' the memory out from
> > under it.
>
> I think we're saying the same thing. If another CPU tries to
> perform the same
> 'lock', it will be delayed (stopped for a while).

Correct, but the same is true for normal writes. Two CPUs cannot write the
same area of memory at the same time (thanks to cache coherency). All the
LOCK prefix buys you is atomicity for 'read/modify/write' operations. The
LOCK prefix provides no serialization beyond what is already there (writes
are not reordered with respect to each other).
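
For illustration, here is a minimal sketch of the kind of read/modify/write
the LOCK prefix makes atomic (hypothetical helper name, modern GCC inline-asm
syntax; not from the original mail):

/* Atomic increment on i386: the LOCK prefix keeps another CPU from
 * interleaving between the read, the add, and the write-back. */
static inline void atomic_inc_sketch(volatile int *v)
{
        __asm__ __volatile__("lock; incl %0" : "+m" (*v));
}

Without the prefix, two CPUs incrementing the same counter could both read
the old value, and one of the updates would be lost.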

> > Nope. Again, lock just makes 'read/modify/write' operations atomic.
> > No more, no less.
>
> I think we don't agree on "atomic" here. "atomic" means to me that either
> other CPUs don't see any effects of the operations protected by 'lock'
> (you are "before") or they see them all (you are "after").

Correct, because the operation itself is atomic.

> You can't be
> "in the middle". That's not enough. If one CPU acquires the lock,
> other CPUs MUST become aware of that as soon as it happens, or they may
> enter the same critical section.
> Since they perform a 'lock read' to see
> if the lock is acquired, the store made by the first CPU and the
> reads made
> by others must be ordered. You need something stronger than
> Processor Order.

This 'lock read' (if it existed) wouldn't help. It could still 'lock read'
the old value. You need memory barriers.

> Your following example is two CPUs accessing shared variables without
> locking.
>
> >
> > Definitions:
> > int valid=0;
> > int value=0;
> >
> > CPU1:
> > value=45;
> > valid=1;
> >
> > CPU2:
> > while(valid==0);
> > printf("%d\n", value);
> >
> > Now, even though the x86 doesn't reorder writes, it does reorder reads.
> > So the read for 'valid==0' could take place _after_ the read for value,
> > even though the read for value is in the next line. The processor can't
> > see any side effects to this reordering. But you see them!
>
> Completely agreed.

Do you see how 'LOCK' won't help you here? Atomicity is not the problem,
it's serialization. You need to ensure that the 'value=45' becomes visible
before the 'valid=1', and all the LOCKing in the world won't do that. What
you need is a memory barrier.
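
As a minimal sketch (assuming the kernel's wmb()/rmb() macros, or plain
mb()), the example above can be made safe with ordering alone, with no LOCK
prefix anywhere:

/* CPU1 (writer) */
value = 45;
wmb();          /* make the store to 'value' visible before 'valid' */
valid = 1;

/* CPU2 (reader) */
while (valid == 0)
        ;       /* spin until the flag is seen */
rmb();          /* don't let the read of 'value' be satisfied early */
printf("%d\n", value);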

> > The problem is not so much 'without lock', the problem is implementing
> > the 'lock' itself.
>
> If i recall my studies well, you can implement locking WITHOUT hardware
> (lock) support, just using normal reads and writes.

This is not so if reads and writes can be reordered. If you have no
guarantee that a 'write' will ever be visible on another processor, what can
you do?
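
To make that concrete, here is a sketch of the classic software-only
approach (Peterson's algorithm for two CPUs; illustrative, not from the
original mail). It uses nothing but plain loads and stores, and it silently
assumes they become visible in program order:

int flag[2] = { 0, 0 };
int turn = 0;

void lock(int me)
{
        int other = 1 - me;

        flag[me] = 1;
        turn = other;
        while (flag[other] && turn == other)
                ;       /* spin */
}

void unlock(int me)
{
        flag[me] = 0;
}

If the read of flag[other] can be satisfied before the store to flag[me] is
visible to the other CPU (a store/load reordering that even the x86's store
buffer can produce), both CPUs may enter the critical section at once, so
some hardware ordering guarantee or explicit barrier is unavoidable.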

> But we do have hardware support. We don't need to implement lock.
> We have a locking primitive.

Yes, but all it buys us is atomic exchanges and increments. The LOCK prefix
(do not confuse it with a real 'lock') does not do much.

> > > Note that if one CPU accesses data read-only, it does not matter if it
> > > reads *old* data, by definition.
> >
> > Not so! Consider the loop above: "while(valid==0); printf("%d\n", value);".
> > These are just two reads, and we are very concerned with reading old data
> > in the printf.
>
> True. You're right. mb()'s let you implement a safe read of shared
> variables, with no need of a lock. Now i see what they're for.

Actually, they allow you to make real 'locks' of the sort we need --
mutexes.

> > Yes, but for complex reasons. Otherwise you might think the following
> > is just fine:
> >
> > volatile int value,valid;
> >
> > CPU1:
> > value=45;
> > valid=1;
> >
> > CPU2:
> > while(valid==0);
> > printf("%d\n", value);
> >
> > It's not obvious why you need a lock in this case. But I think you can
> > now see that the problem is that the reads may not actually take place
> > in the order you coded them.
>
> Yes. It's so obscure that I don't see it (the need for a lock). An mb()
> should be enough, since CPU2 access is read-only.

Yes, an mb is enough, but a bus lock is not. In fact, attempting to use a
bus lock in this case would cause the machine as a whole to blow up
(fortunately, it's impossible). Imagine if you tried to bus lock:

value=45;
valid=1;

While another CPU tried to bus lock:

valid=1;
value=45;

Oops. The first CPU would own a cache line the second CPU wants, and vice
versa. And they've both locked the bus. Yowza. Cache coherency would break
in this case.

This is why a bus lock is limited in scope to a single instruction.

> > No, you often need both locks and memory barriers, since they do vastly
> > different things.
>
> Now i see. There are many uses of mb()'s outside lock-protected code.
> But inside a critical section, how do you need them? Only one CPU
> executes the code, so that's really like the UP case.
> Well, let me take one example of yours (modified):

I'm sorry, I was unclear. By 'locks' here, I meant specifically bus
locks -- that is, the LOCK prefix. Generic locks include memory barriers in
them.

> lock();
> mb();
>
> read a;
> store a;
>
> mb();
> unlock();
>
> I think the mb()'s are not needed. You acquire the lock only if
> you see the
> effect of a previous unlock() (a store).

The mb's are needed. Otherwise, another processor might see the 'unlock',
but not see the 'store a', which would be a disaster. This is why user
functions like 'pthread_mutex_unlock' contain built-in memory barriers.

> Given Processor Order, if you see
> that store, you must see also the effects of previous stores, such as the
> 'store a'. So, the 'read a' can't be executed before the lock (that's what
> you're protecting us from, i think). Also, since stores are
> already in order,
> you don't need the mb() between store a and unlock().

Only on x86 is this so. This is not so on other CPUs that reorder writes
with respect to each other.

> > Yes, but without atomic operations, it's hard to let other processors
> > know that you've finished all your operations.
>
> A critical section *is* atomic, from the point of view of others trying
> to execute it. If some other CPU accesses the shared variables
> (read-only, say)

Yes, but how do you implement a critical section with atomic operations?

> without acquiring the lock, yes, it will see in-progress operations.
> There you may need mb()'s, to make its "view" coherent at least. Sounds
> quite a dirty hack, to me, but i've nothing against them if they really
> pay B-) ("pecunia non olet").

The typical lock/unlock functions you use to enter critical sections
generally contain both atomic operations and memory barriers. Atomic
operations to ensure that only one CPU manages to acquire the lock, and
memory barriers to ensure that reads and writes don't move 'around' the
lock/unlock points.
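
A rough sketch of such a pair, assuming an atomic-exchange helper along the
lines of the kernel's xchg(); the function names are illustrative, not the
actual spinlock implementation:

void my_lock(volatile int *l)
{
        /* Atomic part: only one CPU manages to swap a 0 out of the word. */
        while (xchg(l, 1) != 0)
                while (*l != 0)
                        ;       /* wait until it looks free, then retry */
        mb();   /* barrier part: keep the critical section below the lock */
}

void my_unlock(volatile int *l)
{
        mb();   /* make the critical section's stores visible first... */
        *l = 0; /* ...then drop the lock */
}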

> CPU1:
> lock();
> value=45;
> valid=1;
> unlock();
>
> CPU2:
> ok=0;
> do {
> lock();
> ok = valid;
> unlock();
> } while (ok==0);
> printf("%d\n", value);
>
> i think it's ok. Slower, but correct. mb()'s are a better solution here,
> of course. Unless CPU2 updates 'value', a lock is not necessary. But it
> does the job.

This only works if 'unlock' contains a memory barrier already! Otherwise
the 'printf' could still read an old value for 'value'. Reads can be
performed early on the x86.

> Yup. But that happens only with SMP and shared variables... what i was
> completely missing is their use for correct r-o access to shared vars.

A lot of people miss this. When writing user-level code, you should always
follow the POSIX memory visibility rules and not play silly games with
'volatile' and such.
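
For instance, the POSIX-visible way to publish the 'value'/'valid' pair from
user space is simply to do both accesses under a mutex and let the lock and
unlock calls supply the ordering (a user-level sketch, not from the original
mail):

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static int value, valid;

void *writer(void *arg)
{
        pthread_mutex_lock(&m);
        value = 45;
        valid = 1;
        pthread_mutex_unlock(&m);       /* includes the needed barrier */
        return NULL;
}

void *reader(void *arg)
{
        int done = 0, v = 0;

        while (!done) {
                pthread_mutex_lock(&m);
                done = valid;
                v = value;
                pthread_mutex_unlock(&m);
        }
        printf("%d\n", v);
        return NULL;
}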

DS


