Subject: Re: [RFC] Disable lockref on arm64
On Thu, May 2, 2019 at 4:19 PM Jayachandran Chandrasekharan Nair
<jnair@marvell.com> wrote:
>>
> I don't really see the point you are making about hardware. If you
> look at the test case, you have about 64 cores doing CAS to the same
> location. At any point one of them will succeed and the other 63 will
> fail - and in our case since cpu_relax is a nop, they sit in a tight
> loop mostly failing.

No.

My point is that the others will *not* fail, if your cache coherency acts sane.

Here's the deal: with a cmpxchg loop, no cacheline should *ever* be in
shared mode as part of the loop. Agreed? Even if the cmpxchg is done
with ldx/stx, the ldx should do a read-for-write cycle, so at no
single time will you ever have a shared cacheline.
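
(A minimal sketch of the loop being discussed, using the GCC/Clang
__atomic builtins rather than the kernel's actual lockref code:)

#include <stdint.h>

/* Sketch only: cmpxchg-style increment.  The initial __atomic_load_n is
 * the one plain read; on failure, __atomic_compare_exchange_n writes the
 * value it found in memory back into "old", so every retry is fed by the
 * previous cmpxchg and the looping CPU can keep the line exclusive. */
static inline void count_inc(uint64_t *count)
{
	uint64_t old = __atomic_load_n(count, __ATOMIC_RELAXED);

	while (!__atomic_compare_exchange_n(count, &old, old + 1,
					    false /* strong */,
					    __ATOMIC_ACQ_REL,
					    __ATOMIC_RELAXED))
		;
}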

And once one CPU gets ownership of the line, it doesn't lose it
immediately, so the next cmpxchg will *succeed*.

So at most, the *first* cmpxchg will fail (because that's the one that
was fed not by a previous cmpxchg, but by a regular load (which we'd
*like* to do as a "load-for-ownership" load, but we don't have the
interfaces to do that)). But the second cmpxchg should basically always
succeed, unless something exceptional happened (maybe an interrupt,
maybe something big like that).
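
(A hedged sketch of how that initial load could at least hint at
"load-for-ownership" with today's interfaces: issue a write prefetch,
roughly what the kernel's prefetchw() boils down to, before the plain
load.  Whether the line actually arrives in exclusive state is up to the
microarchitecture, so this is an illustration of the idea, not a
guarantee:)

#include <stdint.h>

/* Sketch only: ask for the line in a writable state before reading it,
 * so even the first compare-exchange has a chance of finding it owned. */
static inline uint64_t load_for_update(const uint64_t *count)
{
	__builtin_prefetch(count, 1);	/* rw = 1: prefetch for write */
	return __atomic_load_n(count, __ATOMIC_RELAXED);
}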

Ergo: if you have a case of failing cmpxchg a lot, your cache
coherency is simply bad. Your hardware people should be ashamed of
themselves for letting go of the cacheline without just letting the
next cmpxchg succeed.

Notice how there is *NO* ping-pong. Sure, the cacheline moves around,
but every time it moves around just once, a thread makes progress.
None of this "for every progress, there are 63 threads that fail"
garbage that you're claiming is normal.

It's not normal, and it's not inevitable.

If it really happens, it's a sign of bad hardware. Just own it, and
talk to the hw people, and make sure it gets fixed in ThunderX3. Ok?

Linus
