Date:    25 Nov 1999
Subject: Are SMP spinlocks safe in WB cached mem?
Having an SMP box, and noting all the SMP crash reports, I did some
light reading :) of the IA32 System Programming Manual and came up
with a scenario for P6 SMP spinlock corruption/theft:

CPU0 holds spinlock SL and clears (releases) it. The write is still in cache, not memory.
CPU1 tries to read SL.
CPU0 sees the read, helpfully signals HITM#, and passes the cache line.
CPU1 starts receiving the line and sets SL.
CPU0 "simultaneously" sets SL in its own cache line.

Both CPUs think they hold SL, and non-reentrancy chaos ensues.
Of course, this is going to be _very_ rare. But spinlocks spin,
and the same locks get taken again and again. And what counts as
"simultaneous"? The 40-60 ns it takes to transfer the cache
line -- 24 CPU clocks with a 6x multiplier? The same bus clock?
The same CPU clock?
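
For concreteness, the spin_lock/spin_unlock pair I have in mind is
roughly the i386 code, sketched from memory below with made-up names
(not the exact kernel source).  Note the kernel's polarity is inverted
from my wording above -- there 1 means free -- but the shape is the
same: a LOCKed read-modify-write to take the lock, and a plain byte
store to release it.

typedef struct { volatile unsigned char lock; } my_spinlock_t;  /* 1 = free, 0 = held */

static void my_spin_lock(my_spinlock_t *sl)
{
        __asm__ __volatile__(
                "1: lock ; decb %0\n\t"   /* LOCKed RMW: a "cache lock" on P6 [1][2] */
                "   jns 3f\n"             /* went 1 -> 0: we own the lock            */
                "2: cmpb $0,%0\n\t"       /* otherwise spin on plain reads           */
                "   jle 2b\n\t"
                "   jmp 1b\n"
                "3:"
                : "+m" (sl->lock) : : "memory", "cc");
}

static void my_spin_unlock(my_spinlock_t *sl)
{
        __asm__ __volatile__(
                "movb $1,%0"              /* plain store: this is the write that can */
                : "+m" (sl->lock)         /* sit in the cache instead of DRAM        */
                : : "memory");
}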

Refs [1] & [2] are very clear that LOCK only goes as far as the cache
(L1 or L2?), and that memory writes can be long delayed for write-back
cached memory. [3] describes the HITM# mechanism.
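
[2] singles out XCHG, so the equivalent test-and-set style acquire
would be something like the sketch below (hypothetical name again;
xchg with a memory operand carries an implicit LOCK, and per [2] it
locks the cache rather than the bus when the operand fits in one
cache line):

static unsigned char my_trylock_xchg(volatile unsigned char *sl)
{
        unsigned char val = 0;            /* 0 = held */
        __asm__ __volatile__(
                "xchgb %0,%1"             /* implicit LOCK prefix */
                : "+q" (val), "+m" (*sl)
                : : "memory");
        return val;                       /* 1 means we just took a free lock */
}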

Unless HITM# also sets the sent cache line to MESI "invalid",
the simultaneous write could happen. Most likely it sets
"shared". CPU0 can certainly read a clear SL in its cache, and
CPU1 just got the line as it's supposed to (shared?). The chipset
is supposed to write the HITM# data to DRAM (if the BX isn't
throttled). The first write sets "exclusive" on itself and
"invalid" on the other, and goes to DRAM. But what if the writes
are "simultaneous"?
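
To spell out my reading of the scenario in MESI terms (the question
marks are exactly the part I am unsure about):

    t0: CPU0 line = M (SL clear, write-back still pending),  CPU1 line = I
    t1: CPU1 reads SL; CPU0 asserts HITM# and passes the line,
        with the memory controller snooping the implicit write-back [3]
    t2: CPU0 line = S?,  CPU1 line = S?
    t3: both CPUs do their "set SL" while the line transfer /
        write-back is still in flight -- the "simultaneous" window
        I am asking about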

Reading between the lines of the full Chapters 7 & 9, write-back
cached DRAM has higher performance, but is not fully SMP coherent.

Fortunately, a solution is also hinted at: use the write-through or
uncacheable attributes (PCD & PWT) for critical memory.
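
A minimal sketch of what that would mean on i386 is below.  The bit
positions are from the manual; the helper and its arguments are
hypothetical, not an existing kernel interface, and real code would
also have to find the PTE and cope with large pages:

#define PTE_PWT (1UL << 3)      /* page write-through */
#define PTE_PCD (1UL << 4)      /* page cache disable */

/* Mark the page holding a critical lock uncacheable, then flush
 * the stale TLB entry for its virtual address. */
static void my_mark_page_uncached(unsigned long *pte, void *vaddr)
{
        *pte |= PTE_PCD | PTE_PWT;
        __asm__ __volatile__("invlpg %0" : : "m" (*(char *)vaddr) : "memory");
}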

Is this correct, or have I misread something?

-- Robert, author of `cpuburn`  http://users.ev1.net/~redelm


[1] The IA32 System Programming Manual, section 7.1.4, says:
> For the P6 family processors, if the area of memory being locked
> during a LOCK operation is cached in the processor that is performing
> the LOCK operation as write-back memory and is completely contained
> in a cache line, the processor may not assert the LOCK# signal on the
> bus. Instead, it will modify the memory location internally and allow
> its cache coherency mechanism to insure that the operation is carried
> out atomically. This operation is called "cache locking." The cache
> coherency mechanism automatically prevents two or more processors that
> have cached the same area of memory from simultaneously modifying data
> in that area.


[2] The IA32 System Programming Manual, section 7.2.4, says:
>* For areas of memory where weak ordering is acceptable, the write
> back (WB) memory type can be chosen. Here, reads can be performed
> speculatively and writes can be buffered and combined. For this type
> of memory, cache locking is performed on atomic (locked) operations
> that do not split across cache lines, which helps to reduce the
> performance penalty associated with the use of the typical synchronization
> instructions, such as XCHG, that lock the bus during the entire
> read-modify-write operation. With the WB memory type, the XCHG
> instruction locks the cache instead of the bus if the memory access
> is contained within a cache line.


[3] The IA32 System Programming Manual, section 9.1, says:
> Beginning with the P6 family processors, if a processor detects
> (through snooping) that another processor is trying to access a
> memory location that it has modified in its cache, but has not yet
> written back to system memory, the snooping processor will signal
> the other processor (by means of the HITM# signal) that the cache
> line is held in modified state and will perform an implicit write-back
> of the modified data. The implicit write-back is transferred directly
> to the initial requesting processor and snooped by the memory controller
> to assure that system memory has been updated. Here, the processor
> with the valid data may pass the data to the other processors without
> actually writing it to system memory; however, it is the responsibility
> of the memory controller to snoop this operation and update memory.
