Subject: Re: [patch] Re: spin_unlock optimization(i386)
> An rmb() will prevent reads that occur before it from being
> speculatively executed after it, and vice versa. rmb() expands to asm
> volatile ("lock; addl $0,0(%esp)" : : : "memory"). Being a locked
> operation it serves as a processor barrier for both reads and writes;
> being in cache, it is fast on a PPro. The "memory" constraint makes it
> a compiler memory barrier; register operations can still be reordered
> around it in theory. That's ok.

Unfortunately the locked addl turns into as big an overhead as the rest of the
spinlock clearing.

> rmb() then "movl $0,%0".
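(i.e. the proposed spin_unlock() would come out roughly as below; the name of
the lock-word field and the 0-means-unlocked encoding are assumptions here:)

	#define spin_unlock(x) \
		do { \
			/* full barrier via the locked addl */ \
			rmb(); \
			/* plain store clears the lock word */ \
			__asm__ __volatile__("movl $0,%0" \
				: "=m" ((x)->lock)); \
		} while (0)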

We do seem to have some cases where we have implicit synchronization points
before an unlock where a plain movl would work (i.e. we could use some kind of
__spin_unlock()). Several drivers end with outl(value, lp->base+offset); spin_unlock().
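(A minimal sketch of such a __spin_unlock(), assuming spinlock_t carries a
plain integer lock word named "lock": just the store, relying on the preceding
instruction for ordering:)

	static inline void __spin_unlock(spinlock_t *sp)
	{
		/* No locked operation: the caller guarantees an
		 * implicit synchronization point (e.g. the outl
		 * above) immediately before.  The "memory" clobber
		 * still keeps the compiler from sinking earlier
		 * stores past the unlock. */
		__asm__ __volatile__("movl $0,%0"
			: "=m" (sp->lock)
			:
			: "memory");
	}

so a driver exit path would become something like

	outl(value, lp->base+offset);	/* implicit synchronization point */
	__spin_unlock(&lp->lock);	/* lock member name assumed */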


