Subject: RE: spin_unlock optimization(i386)

> Thank you for your answer. It (combined with the morning light) cleared
> things up. I was simply wrong, mixing different issues (instruction
> order, cache behavior and serialization) together instead of keeping
> them separated. [rereading - I'll cut some of the quoting to keep this
> message small... yet it's big, sorry!]

It's not an easy thing to wrap your brain around.

> first of all, I was REALLY wrong in my understanding of how 'lock' works:
> it does not have to delay all activities on all CPUs, but just
> other 'lock's.
> In other words, if CPU B updates the lock variable by means of a simple
> store (without the lock prefix), this is just buggy code; reads or stores
> on different variables are not aware of lock operations, as they don't
> need to be. Only 'lock' operations get serialized.

While this is true, it's not all that 'lock' does. If an increment, for
example, is done without a lock, the following interleaving could happen:

CPU1          CPU2
read
modify        read
write         modify
              write

This would mean that two increments could do the same thing as one
increment, which is not good. Now note that cache coherency won't save you
here (which is why 'lock' is necessary). The memory could be arbitrated
between the two caches in the cache coherency logic at each step.

For example:

1) CPU1 does its read, CPU1's cache owns the memory.

2) CPU2 attempts to do its read, so CPU2's cache takes over the memory,
which still holds the original value.

3) CPU1 increments the value it previously read.

4) CPU2 increments the value it previously read.

5) CPU1 attempts to do its write, so its cache takes over the memory (which
still holds the original value) and stores the incremented value.

6) CPU2 attempts to do its write, so its cache takes over the memory (which
now holds CPU1's incremented value), only to store that same value again.
One of the two increments is lost.

So the cache coherency logic won't save you here. The problem is not cache
coherency. But note that at step 6, had CPU2 _read_ the data instead, the
cache coherency logic would have assured it of getting the new value.
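
To make this concrete, here is a minimal sketch (not actual kernel code;
the function names are hypothetical) of a plain increment versus a locked
one, in GCC inline assembly for the i386:

static volatile int counter = 0;

static void plain_inc(void)
{
        counter++;              /* read/modify/write: not atomic on SMP */
}

static void locked_inc(void)
{
        __asm__ __volatile__(
                "lock; incl %0" /* the whole read/modify/write is atomic */
                : "+m" (counter));
}

With plain_inc(), the interleaving above can lose an increment. With
locked_inc(), no other processor can steal the memory out from under the
operation between the read and the write.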

> Cache coherency makes following 'lock's see the updated value of
> the lock variable at once, with no delay.

Well, yes and no. While the 'lock' prefix does make the operation atomic,
the operation as a whole could complete atomically in cache (so long as that
CPU owned the memory), so other CPUs won't necessarily see the new value at
once.

> unlock(), which does a normal store, MAY be seen delayed on other CPUs,
> causing them to waste some cycles spinning.

Any memory operation may be seen delayed on other CPUs, unless you flush
the cache. Cache coherency won't save you here because the processor itself
can reorder operations.

> > No. No. A memory barrier simply prevents the active processor from
> > 'migrating' reads or writes across the barrier.
>
> I see. So, this is really a UP matter, not SMP. mb() is used to make a
> sequence of instructions "atomic" on this CPU, as regards reordering:

On UP there is no problem. The processor can see all the side effects of its
reordering and can avoid the problems.

> <no effects of the "atomic" code are seen here>
> mb()
> <"atomic" code>
> mb()
> <ALL effects of the "atomic" code must be seen here>

This is correct. But for a single processor, it's true anyway. The
processor will not reorder reads (or reads with respect to writes) if it can
see that the effects are incorrect. The problem is, the processor cannot see
the effects on other processors.

> I said "effects" because reads can execute before, but if
> something changes
> later (store) the results of the pre-execution are discarded, and reads
> re-executed at the right time.

Correct, but the something that changes has to be invisible to the
processor; otherwise the reordering logic would have taken it into account.

> > No, that's not what a lock does either. A lock simply makes a
> > 'read-modify-write' operation atomic.
>
> Well, system-wide, anyway. Stores made by lock operations MUST be seen
> on all CPUs as soon as they complete.

This is not true. They just have to be atomic. If another processor wants
to see it, then its cache coherency logic will have to arbitrate for it. And
it's not even guaranteed to have actually been done until another store
operation takes place (since x86 processors reserve the right to delay reads
and writes).

> This requirement is mainly the
> origin of my mistakes. Stores MUST be seen, but only by the reads made
> inside another lock operation. That is: every read made inside a lock
> (but not normal reads) MUST see any previous store made inside a lock
> (but not normal stores). Lock-reads enforce coherency of lock-stores
> only. Also, lock operations are strictly serialized, i.e. they are not
> allowed to overlap on different CPUs.

Lock operations can overlap on different CPUs, provided they are locking
different things. If they try to lock the same thing, cache coherency will
save you.

> When a lock operation starts,
> it prevents (blocks) all CPUs trying to execute another lock. CPUs
> executing normal operations are unaffected. Since three or more
> CPUs (even out of 16) trying to execute a lock at the same time is very
> unlikely, that's not an issue (even two is uncommon, since the section
> protected by a spin_lock should be short).

Nope. Not true at all. The lock just makes the operation atomic, preventing
another processor from 'stealing' the memory out from under it.

> Here, "previous/following" mean in the absolute timeline. What made me
> confused is the (many) ways you can look at a sequence of instructions.
> There's the order they are (supposed to be) in the source code (C assumed
> here, of course). Compilers do reorder them, to optimize CPU work.
> This leads to an order of instruction in memory which may be different.
> This is (I hope) the same order the CPU fetches them from memory
> (but I'm not sure).

No. Not at all. The CPU can also reorder memory operations. The x86 doesn't
reorder write operations, but other CPUs reorder even those.

> Anyway, they may be reordered at execution time,
> and that's what mb()'s are for, to help programmers control this.

Right.

> Then there's the order they are executed, on an absolute timeline,
> as seen by an external observer, who can see all CPU/cache/bus/memory
> activity. Last, there's the order of instructions (better, instruction
> effects) as seen by other system entities, such as CPUs. That's what we
> care about on SMP, and that's what Processor Order is all about (I think).
>
> The discussion on Processor Order pointed out that we don't really care
> about real (absolute) order, but only about the order effects are seen.

Right. And that's why we use memory barriers.

> There are primitives that enforce serialization of effects, by means
> of tying them to the absolute timeline (that's Strict Order, BTW),
> and that's what lock does. Effects of read/stores in a lock are strictly
> serialized. That's required to implement mutex (spinlocks).

Nope. Again, lock just makes 'read/modify/write' operations atomic. No
more, no less.

> So, when analyzing a piece of code (e.g. that spin_locks), first you
> should take into account compiler optimizations: use 'volatile'
> where needed,
> inline some asm (also used to generate specific instructions, such as
> 'lock'). Then, having the picture of what is generated, apply any
> "reordering rules" the CPU uses runtime. Put some mb()'s to keep things
> ordered the way you want. You get what is really executed on a single CPU.
> Now, while evaluating concurrency, apply the Processor Order model, to see
> how the effects of instructions executed by another CPU are seen on this
> CPU. If you're accessing some shared data, you need a mutex, which is
> implemented by means of an atomic test-and-set under the 'lock' prefix,
> which makes it system-wide atomic (serialized) with respect to other
> 'lock's on other CPUs.
> Does it make sense?

Mostly.
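
That last part is worth sketching. Here is a hedged, simplified version
(not the kernel's actual code; the names are hypothetical) of a spinlock
built on an atomic exchange. 'xchg' with a memory operand is implicitly
locked on the x86, so it gives the atomic test-and-set described above:

static void my_spin_lock(volatile int *l)
{
        int old;

        do {
                __asm__ __volatile__(
                        "xchgl %0, %1"  /* atomically swap 1 into *l */
                        : "=r" (old), "+m" (*l)
                        : "0" (1));
        } while (old != 0);     /* we hold the lock once we read back 0 */
}

static void my_spin_unlock(volatile int *l)
{
        *l = 0;         /* a plain, unlocked store: the i386 spin_unlock
                           optimization this thread's subject refers to */
}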

> > On x86 architectures, stores are always performed in order.
> However, reads
> > may be reordered and the relative order of reads and writes is not
> > guaranteed. This is what memory barriers are for.
>
> Oh! You really mean that (pseudocode, UP):
>
> a = 0; (write a)
> ...
> a = 1; (write a)
> b = a; (read a; write b)
>
> print b;
>
> leads to an undefined result, could be b=1 as expected, or b=0 if read a
> gets executed before write a? Should it be rewritten:

No, no. The reordering of reads or writes is invisible to UP code. The
processor can see what effects reordering has and will not reorder such that
the effects change the results of code. The problem is, the processor cannot
take into account what other processors might be doing.

> a = 0; (write a)
> ...
> a = 1; (write a)
> mb(); /* read can't walk up this */
> b = a; (read a; write b)
>
> mb();
> print b;
>
> to be sure that 1 is printed? Print b contains a read b which may
> be executed
> before the write b, without the mb(), too. I'm pretty sure I'm wrong.
> 'read a' means "I want to see the effect of the last write on a", so
> can't be moved before the last write or its very semantic will break.

Yes, it can be moved before the last write without breaking the semantic.
The semantic doesn't care when the read or write actually takes place, just
what value you get!

> 'read b' CAN be moved before 'write a', because it's actually "I want to
> see the effect of the last write on b, who cares about a?". But I can't
> imagine any case in which you may want to prevent this from happening
> with a mb(). Unless the variables are shared, e.g. you're "talking" to
> another chip via I/O registers or shared mem. That's completely different,
> as you may want to use UC anyway, and it may also happen that a 'read'
> is not just that, but implies some status changes in your peer, so acts
> also as a write (logically), with the CPU seeing it as "just a read"
> and reordering it, and of course you don't want that.
>
> So, forgetting the I/O registers case, when do mb()'s come into play?

When you really do care not just what values you get but in what order the
reads and writes actually take place. Consider:

Definitions:
int valid=0;
int value=0;

CPU1:
value=45;
valid=1;

CPU2:
while(valid==0);
printf("%d\n", value);

Now, even though the x86 doesn't reorder writes, it does reorder reads. So
the read for 'valid==0' could take place _after_ the read for value, even
though the read for value is in the next line. The processor can't see any
side effects to this reordering. But you see them!

So what you want is:

CPU2:
while(valid==0);
mb();
printf("%d\n", value);

The CPU can't possibly see why this would be necessary, but you do. It
prevents a speculative read of 'value' from taking place before the read of
'valid', causing the wrong number to be printed.

> When do you really want to keep 'read b' from executing before 'write a'?

Change CPU2 to:

while(valid==0);
[mb();]
output=value;

Without the mb(), output could be zero. With it, output must be 45.
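
One more note on this example: the x86 happens to keep CPU1's two writes
in order, but a CPU that also reorders writes would need a barrier on the
writing side too. A sketch of the fully portable pattern, assuming the
kernel's rmb()/wmb() variants of mb():

CPU1:
value=45;
wmb();          /* make 'value' visible before 'valid' */
valid=1;

CPU2:
while(valid==0);
rmb();          /* don't let the read of 'value' float above 'valid' */
output=value;   /* now guaranteed to be 45 */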

> (assuming you wrote "write a; read b", of course). If the answer is
> "never", on UP you don't need mb()'s unless special cases, which are
> really some kind of MP, as two actors come into play.

On UP, you shouldn't need memory barriers, unless you're dealing with other
devices on the bus.

> I can think of SMP, where if a and b are shared but you want to see them
> via your WT cache, for performance reasons (not only). If you don't use
> locks, you may want to keep your access pattern ordered. Since there's
> a kind of communication protocol you must adhere to (logically),
> you can't just ask for the answer before you put the question, in a way.

Right.

> But when does that happen? Two (or more) CPUs accessing *r/w* shared
> data without lock (better, outside a mutex-protected critical section)?

The problem is not so much 'without lock', the problem is implementing the
'lock' itself.

> Note that if one CPU accesses data read-only, it does not matter if it
> reads *old* data, by definition.

Not so! Consider the loop above: "while(valid==0); printf("%d\n", value);".
These are just two reads, and we are very concerned with reading old data in
the printf.

> If you're taking into account time,
> or better serialization of events, you need a lock anyway.

Yes, but for complex reasons. Otherwise you might think the following is
just fine:

volatile int value,valid;

CPU1:
value=45;
valid=1;

CPU2:
while(valid==0);
printf("%d\n", value);

It's not obvious why you need a lock in this case. But I think you can now
see that the problem is that the reads may not actually take place in the
order you coded them.

> If you want
> to read coherent data (before or after a certain atomic event, call it
> a transaction) you still need some locking. No, if you don't need
> locking, you need mb()'s only if there's a concurrent r/w access to
> the shared data. If you lock, you don't need mb()'s, since only
> one CPU executes the critical section, so we fall back to the UP case.

No, you often need both locks and memory barriers, since they do vastly
different things.

> You just need all changes you make from your critical section to be
> committed (visible to others) as a whole, and not only part of them.

Yes, but without atomic operations, it's hard to let other processors know
that you've finished all your operations.

> As we have already seen, using lock, thanks to Processor Order,
> this holds. No mb() required, with lock.

Not so, because even if processor 1 executes a locked operation, that
doesn't mean that processor 2 does, so reads could be reordered around the
read that detects the lock. You still need memory barriers. (If you don't
see this, look at my examples above and see if there's any way a LOCK will
help you there. The problem is totally one of ordering. Ordering and
atomicity are different issues.)
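
To see that in code, reuse the example above (the lock name is
hypothetical). CPU1's update is now atomic, but CPU2 never executes a
locked operation, so its reads can still be reordered and the barrier is
still required:

CPU1:
spin_lock(&mylock);
value=45;
valid=1;
spin_unlock(&mylock);

CPU2 (takes no lock):
while(valid==0);
mb();           /* the lock on CPU1 cannot order CPU2's reads */
printf("%d\n", value);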

> So, either the semantic of read is broken, and you need mb() almost every-
> where, even to decrement a counter (which of course I don't believe),
> or mb()'s are very seldom necessary, just for r/w access to shared variables
> on (S)MP OUTSIDE a critical section (no lock).

Nope. You missed the third alternative -- mb()'s are necessary any place
where something outside the processor could cause its heuristics to think a
reordering is safe when it isn't.

> Does it mean that every read will always see the right value, never an old
> one? Isn't it Strict Order instead of Processor Order?

Every read will always see the right value on UP, whether or not the read
actually takes place when you think it does. Because of this, you have to be
very careful when there are circumstances outside that single processor's
control. It becomes too smart for its own good.

DS


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
