Subject: Re: [PATCH -tip] x86: atomic64: inline atomic64_read()


Btw, it's entirely possible that we could have a faster "atomic64_read()"
if we have some guarantees about the behavior of the counter.

For example, let's assume that the counter is known to be monotonic: in
that case, we could do a 64-bit read with something like

	u64 atomic64_read_monotonic(atomic64_t *p)
	{
		unsigned int low, high;
		unsigned int last = read_high_word(p);

		do {
			lfence();		/* high-word read must happen before the low-word read */
			low = read_low_word(p);
			high = last;
			lfence();		/* low-word read must happen before re-reading the high word */
			last = read_high_word(p);
		} while (last != high);

		return ((u64)high << 32) | low;
	}

which is not necessarily all that much faster than the cmpxchg8b (the two
lfence's aren't going to be cheap), but keeping the cacheline in a shared
state might be a win.
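
For comparison, here's a rough sketch (hand-waving, not the actual kernel
code) of what the cmpxchg8b-based read boils down to, assuming atomic64_t
carries a 64-bit "counter" member: cmpxchg8b always leaves the current
memory value in edx:eax, so a locked compare-and-exchange with don't-care
operands is effectively an atomic 64-bit load, at the cost of a locked RMW
that takes the cacheline exclusive every time.

	/* Sketch only: the compare value is arbitrary. If it misses we get the
	 * old value back in edx:eax; if it hits we store back the same value
	 * we compared against, so the counter is unchanged either way. */
	static inline u64 atomic64_read_cmpxchg8b(atomic64_t *p)
	{
		u64 old = 0;

		asm volatile("lock; cmpxchg8b %1"
			     : "+A" (old)		/* edx:eax: compare value in, old value out */
			     : "m" (p->counter),
			       "b" ((u32)old),		/* ecx:ebx: new value stored on a hit */
			       "c" ((u32)(old >> 32))
			     : "memory", "cc");
		return old;
	}

So the monotonic version trades that locked RMW for two fences and a retry
loop, and never needs the line in anything but shared state.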

Here, the "monotonic" part matters because the above would not work if the
counter can move back and forth across the 32-bit boundary, i.e. if the
value ever does an atomic increment and then an atomic decrement like this:

0x0ffffffff -> 0x100000000 -> 0x0ffffffff

then the above read logic might see a "stable" high word of 0 (before and
after), and a low word of 0 (in the middle), and think that the counter
really was 0 at one point.
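
Spelled out as an interleaving (the counter starts at 0x0ffffffff and the
writer does exactly that increment/decrement pair while the reader runs the
loop above):

	reader: last = read_high_word(p)   -> 0     (counter = 0x0ffffffff)
	writer: atomic increment           -> counter = 0x100000000
	reader: low  = read_low_word(p)    -> 0,  high = last = 0
	writer: atomic decrement           -> counter = 0x0ffffffff
	reader: last = read_high_word(p)   -> 0  == high, so the loop exits
	reader: returns ((u64)0 << 32) | 0 = 0,  a value the counter never held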

But if it's a strictly monotonic counter, or has some other stability
guarantee (the way we have certain stability guarantees on PTEs, for
example: we know that the high bits can only change if the present bit in
the low word also changed), you can sometimes do tricks like the above.

Do we actually _have_ any performance-critical 64-bit counters that have
monotonicity guarantees? I have no idea. I'm just throwing out the notion.

Linus

