Subject: Re: spin_unlock optimization (i386)
From: David Wragg
Date: 24 Nov 1999

"Jeff V. Merkey" <jmerkey@timpanogas.com> writes:
> On Intel machines, LOCK#'s are heavier than they
> need to be because of all the issues Intel has with people writing
> self-modifying code (about 60% of their errata deals with this problem).

A couple of weeks ago I went to a presentation about mutex
optimisations. It included measurements of the cost of a mutex lock
followed by an unlock on Alpha, both with the usual
load-locked/store-conditional sequence plus memory barrier, and with
the mutex code modified to use a plain load/store sequence with no
memory barriers (both variants are sketched below). I don't remember
the exact figures, but:

- The cost for the ll/sc version was >200 cycles on all of EV4, EV5,
and EV6. The cost for the l/s version was <50 cycles on those
processors.

- The l/s version took fewer cycles on each successive generation of
processor, as you might expect. But the ll/sc version took more
cycles on EV6 than on EV5.
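
(To make the comparison concrete, here is a rough sketch of the two
acquire paths in present-day C with <stdatomic.h>; the type name and
the 0-free/1-held encoding are mine, not from the presentation. On
Alpha the first variant compiles to a ldl_l/stl_c retry loop followed
by an mb, which is essentially what the >200-cycle figure is paying
for; the second is just an ordinary load and store.)

    #include <stdatomic.h>

    /* Hypothetical mutex word: 0 = free, 1 = held (my encoding). */
    typedef struct { atomic_int held; } mutex_t;

    /* ll/sc-with-barrier style acquire: an atomic read-modify-write
     * with acquire ordering.  On Alpha this is a ldl_l/stl_c retry
     * loop followed by an mb.  Returns nonzero on success. */
    static int mutex_trylock_llsc(mutex_t *m)
    {
        int expected = 0;
        return atomic_compare_exchange_strong_explicit(
                   &m->held, &expected, 1,
                   memory_order_acquire, memory_order_relaxed);
    }

    /* Plain load/store acquire, no barriers: just an ordinary load
     * and store.  It provides no cross-CPU atomicity or ordering,
     * which is why it can be so much cheaper. */
    static int mutex_trylock_plain(mutex_t *m)
    {
        if (atomic_load_explicit(&m->held, memory_order_relaxed) != 0)
            return 0;                           /* already held */
        atomic_store_explicit(&m->held, 1, memory_order_relaxed);
        return 1;
    }

The gap between the two is essentially the ll/sc retry loop plus the
memory barrier.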

So compared to the competition, it doesn't seem that Intel is doing
too badly with its LOCK performance.
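
(For reference, the i386 choice this thread is about looks roughly
like the sketch below. The names and the "bit 0 set = held" encoding
are illustrative rather than the exact kernel source: the current
unlock is a locked read-modify-write that asserts LOCK#, while the
proposed optimization is a plain store, which x86 does not reorder
with the loads and stores of the critical section that precede it.)

    /* Illustrative spinlock word; not the exact kernel definition. */
    typedef struct { volatile unsigned int lock; } demo_spinlock_t;

    /* Locked variant: atomic read-modify-write, asserts LOCK#. */
    static inline void demo_spin_unlock_locked(demo_spinlock_t *lp)
    {
        __asm__ __volatile__(
            "lock ; btrl $0,%0"   /* clear the "held" bit atomically */
            : "+m" (lp->lock) : : "memory", "cc");
    }

    /* Plain-store variant: an ordinary mov, no LOCK#.  x86 does not
     * reorder a store with earlier loads or stores, so the critical
     * section is complete before the lock appears free. */
    static inline void demo_spin_unlock_plain(demo_spinlock_t *lp)
    {
        __asm__ __volatile__(
            "movb $0,%0"          /* write 0: lock free */
            : "=m" (lp->lock) : : "memory");
    }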


David Wragg

