Date: Mon, 27 Nov 2017 12:45:06 +0000
From: Will Deacon <>
Subject: Re: [bisected] system hang after boot
Me again...
On Mon, Nov 27, 2017 at 11:49:47AM +0000, Will Deacon wrote:
> On Wed, Nov 22, 2017 at 09:22:17PM +0100, Peter Zijlstra wrote:
> > On Wed, Nov 22, 2017 at 06:26:59PM +0000, Will Deacon wrote:
> >
> > > Now, I can't see what the break_lock is doing here other than causing
> > > problems. Is there a good reason for it, or can you just try removing it
> > > altogether? Patch below.
> >
> > The main use is spin_is_contended(), which in turn ends up used in
> > __cond_resched_lock() through spin_needbreak().
> >
> > This allows better lock wait times for PREEMPT kernels on platforms
> > where the lock implementation itself cannot provide 'contended' state.
> >
> > In that capacity the write-write race shouldn't be a problem though.
>
> I'm not sure why it isn't a problem: given that the break_lock variable
> can read as 1 for a lock that is no longer contended and 0 for a lock that
> is currently contended, then __cond_resched_lock() is likely to see a
> value of 0 (i.e. spin_needbreak() always returns false) more often than
> not, since it's checked by the lock holder.
>
> > That said, I'd not be horribly sad to see this go, I've always found it
> > to be quite the ugly hack and taking it out should provide some
> > incentive for better lock implementations for the archs relying on this.
>
> Right, and they can always implement arch_spin_is_contended if they have
> a good way to do it.
>
> I'll post this diff as a full patch, since it's clearly needed to get some
> s390 systems booting again with 4.15.
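For anyone joining the thread here, the chain Peter refers to looks roughly
like this in the 4.14-era tree (paraphrased from memory of
include/linux/spinlock.h and kernel/sched/core.c -- a sketch, not a verbatim
quote; check the tree for the exact definitions):

/* include/linux/spinlock.h: with GENERIC_LOCKBREAK, "contended" is
 * approximated by the break_lock flag written by waiters. */
#define raw_spin_is_contended(lock)	((lock)->break_lock)

static inline int spin_needbreak(spinlock_t *lock)
{
#ifdef CONFIG_PREEMPT
	return spin_is_contended(lock);
#else
	return 0;
#endif
}

/* kernel/sched/core.c: the lock *holder* polls spin_needbreak() and
 * briefly drops the lock if a waiter has flagged contention. */
int __cond_resched_lock(spinlock_t *lock)
{
	int resched = should_resched(PREEMPT_LOCK_OFFSET);
	int ret = 0;

	lockdep_assert_held(lock);

	if (spin_needbreak(lock) || resched) {
		spin_unlock(lock);
		if (resched)
			preempt_schedule_common();
		else
			cpu_relax();
		ret = 1;
		spin_lock(lock);
	}
	return ret;
}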
In writing up the patch, I realised that I might be being a bit too purist about what we can actually rely on here, particularly as GENERIC_LOCKBREAK isn't widely selected. I'm still suspicious, but perhaps on the architectures that *do* select it, the propagation of the break_lock variable between CPUs works out because of the barriers in lock()/trylock(), and that's sufficient for a "good enough" spin_is_contended() implementation.
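Concretely, here is one way I think the old inner loop can wedge a CPU (a
hypothetical interleaving of two waiters on one lock, using the pre-patch
slowpath quoted in the diff below):

/*
 *  CPU A                               CPU B
 *  -----                               -----
 *  spin_lock():                        spin_lock():
 *    trylock() fails                     trylock() succeeds
 *                                        (lock)->break_lock = 0;
 *    (lock)->break_lock = 1;   // A's racing store lands after B's clear
 *    while ((lock)->break_lock)
 *            arch_spin_relax(...);
 *                                      spin_unlock();  // break_lock untouched
 *
 *  The lock is now free, but break_lock == 1 and nothing will ever clear
 *  it: CPU A spins in the relax loop forever without retrying trylock().
 */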
Sebastian: could you try the diff below, please? If that fixes s390, then we can debate the merits of GENERIC_LOCKBREAK independently of fixing this issue.
Thanks,
Will
--->8
diff --git a/kernel/locking/spinlock.c b/kernel/locking/spinlock.c
index 1fd1a7543cdd..0ebb253e2199 100644
--- a/kernel/locking/spinlock.c
+++ b/kernel/locking/spinlock.c
@@ -68,8 +68,8 @@ void __lockfunc __raw_##op##_lock(locktype##_t *lock)		\
 									\
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
-		while ((lock)->break_lock)				\
-			arch_##op##_relax(&lock->raw_lock);		\
+									\
+		arch_##op##_relax(&lock->raw_lock);			\
 	}								\
 	(lock)->break_lock = 0;						\
 }									\
@@ -88,8 +88,8 @@ unsigned long __lockfunc __raw_##op##_lock_irqsave(locktype##_t *lock)	\
 									\
 		if (!(lock)->break_lock)				\
 			(lock)->break_lock = 1;				\
-		while ((lock)->break_lock)				\
-			arch_##op##_relax(&lock->raw_lock);		\
+									\
+		arch_##op##_relax(&lock->raw_lock);			\
 	}								\
 	(lock)->break_lock = 0;						\
 	return flags;							\
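For clarity, the slowpath body with this diff applied looks roughly like the
below (reconstructed from the context lines above plus the surrounding
BUILD_LOCK_OPS() macro in kernel/locking/spinlock.c; indentation and the
line-continuation backslashes are approximate):

for (;;) {
	preempt_disable();
	if (likely(do_raw_##op##_trylock(lock)))
		break;
	preempt_enable();

	if (!(lock)->break_lock)
		(lock)->break_lock = 1;

	/* Relax once, then always go back round to trylock(): a stale
	 * break_lock value can no longer trap a waiter in an inner loop. */
	arch_##op##_relax(&lock->raw_lock);
}
(lock)->break_lock = 0;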