Subject: Re: [PATCH for-4.17 2/2] powerpc: Remove smp_mb() from arch_spin_is_locked()

On Wed, Mar 28, 2018 at 04:25:37PM +1100, Michael Ellerman wrote:
> That was tempting, but it leaves unfixed all the other potential
> callers, both in-tree and out-of-tree, and in code that's yet to be
> written.

So I myself don't care one teeny tiny bit about out-of-tree code; they
get to keep their pieces :-)

> Looking today nearly all the callers are debug code, where we probably
> don't need the barrier but we also don't care about the overhead of the
> barrier.

Still, code like:

WARN_ON_ONCE(!spin_is_locked(foo));

will unconditionally emit that SYNC. So you might want to be a little
careful.
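
Very roughly, and not the actual kernel macro (my_warn_on_once() below
is a made-up userspace sketch), the "once" only guards the splat, not
the condition:

#include <stdbool.h>
#include <stdio.h>

/*
 * Sketch of the WARN_ON_ONCE() pattern: the condition, and hence any
 * barrier hidden inside it (spin_is_locked()'s SYNC on powerpc), runs
 * on every call; only the warning itself is one-shot.
 */
#define my_warn_on_once(cond)						\
({									\
	static bool __warned;						\
	bool __ret = !!(cond);		/* evaluated every time */	\
	if (__ret && !__warned) {					\
		__warned = true;					\
		fprintf(stderr, "WARNING at %s:%d\n",			\
			__FILE__, __LINE__);				\
	}								\
	__ret;								\
})

So a check like that in a hot path still pays for the barrier on every
invocation, even long after the warning has fired.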

> Documenting it would definitely be good, but even then I'd be inclined
> to leave the barrier in our implementation. Matching the documented
> behaviour is one thing, but the actual real-world behaviour on
> well-tested platforms (i.e. x86) is more important.

By that argument you should switch your spinlock implementation to RCsc
and include that SYNC in either lock or unlock already ;-)

Ideally we'd completely eradicate the *_is_locked() crud from the
kernel; not sure how feasible that really is, but it's a good goal. At
that point the whole issue of the barrier becomes moot, of course.
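
For the debug-only callers, one plausible replacement pattern is to
state the locking requirement via lockdep instead of peeking at the
lock word at all. A sketch (struct foo and foo_update() are made-up
names, and lockdep_assert_held() is a no-op without CONFIG_LOCKDEP):

#include <linux/lockdep.h>
#include <linux/spinlock.h>

struct foo {
	spinlock_t lock;
	int val;
};

/* Caller must hold f->lock. */
static void foo_update(struct foo *f, int v)
{
	/*
	 * Instead of WARN_ON_ONCE(!spin_is_locked(&f->lock)): no
	 * barrier, no read of the lock word, and it checks that *we*
	 * hold the lock, not merely that somebody does.
	 */
	lockdep_assert_held(&f->lock);
	f->val = v;
}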
