Date: Tue, 14 Sep 2021 15:59:56 +0200
From: Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH 3/4] locking/rwbase: Fix rwbase_write_lock() vs __rwbase_read_lock()
On Tue, Sep 14, 2021 at 09:45:12AM +0200, Thomas Gleixner wrote:
> On Thu, Sep 09 2021 at 12:59, Peter Zijlstra wrote:
>
> > Boqun noticed that the write-trylock sequence of load+set is broken in
> > rwbase_write_lock()'s wait-loop since they're not both under the same
> > wait_lock instance.
>
> Confused.
>
>	lock();				A
>
>	for (; atomic_read(readers);) {
>		...
>		unlock();
>		..
>		lock();			B
>	}
>
>	atomic_set();
>	unlock();			A or B
>
> The read/set is always in the same lock instance.
I really did make a mess of things, didn't I :-/ It was some intermediate state that was broken.
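(Concretely -- a hypothetical reconstruction of the broken shape, not any version that was actually posted -- the hazard is the load and the set landing under two different wait_lock sections:

	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
	if (!atomic_read(&rwb->readers))	/* observes no readers ... */
		/* fall through to claim the lock */;
	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);

	/*
	 * Window: __rwbase_read_lock() can take wait_lock here, see
	 * readers != WRITER_BIAS and atomic_inc() itself into the
	 * read-side critical section.
	 */

	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
	atomic_set(&rwb->readers, WRITER_BIAS);	/* ... wipes out that inc */
	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);

after which both the writer and the sneaked-in reader think they own the lock.)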
How's this then?
---
Subject: locking/rwbase: Extract __rwbase_write_trylock()
From: Peter Zijlstra <peterz@infradead.org>
Date: Thu, 09 Sep 2021 12:59:18 +0200
The code in rwbase_write_lock() is a little non-obvious vs the read+set
'trylock'; extract the sequence into a helper function to clarify the
code.
This also provides a single site to fix fast-path ordering.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
---
 kernel/locking/rwbase_rt.c |   44 ++++++++++++++++++++++++++------------------
 1 file changed, 26 insertions(+), 18 deletions(-)
--- a/kernel/locking/rwbase_rt.c
+++ b/kernel/locking/rwbase_rt.c
@@ -196,6 +196,19 @@ static inline void rwbase_write_downgrad
 	__rwbase_write_unlock(rwb, WRITER_BIAS - 1, flags);
 }
 
+static inline bool __rwbase_write_trylock(struct rwbase_rt *rwb)
+{
+	/* Can do without CAS because we're serialized by wait_lock. */
+	lockdep_assert_held(&rwb->rtmutex.wait_lock);
+
+	if (!atomic_read(&rwb->readers)) {
+		atomic_set(&rwb->readers, WRITER_BIAS);
+		return 1;
+	}
+
+	return 0;
+}
+
 static int __sched rwbase_write_lock(struct rwbase_rt *rwb,
 				     unsigned int state)
 {
@@ -210,34 +223,30 @@ static int __sched rwbase_write_lock(str
 	atomic_sub(READER_BIAS, &rwb->readers);
 
 	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
-	/*
-	 * set_current_state() for rw_semaphore
-	 * current_save_and_set_rtlock_wait_state() for rwlock
-	 */
-	rwbase_set_and_save_current_state(state);
+	if (__rwbase_write_trylock(rwb))
+		goto out_unlock;
 
-	/* Block until all readers have left the critical section. */
-	for (; atomic_read(&rwb->readers);) {
+	rwbase_set_and_save_current_state(state);
+	for (;;) {
 		/* Optimized out for rwlocks */
 		if (rwbase_signal_pending_state(state, current)) {
 			rwbase_restore_current_state();
 			__rwbase_write_unlock(rwb, 0, flags);
 			return -EINTR;
 		}
+
+		if (__rwbase_write_trylock(rwb))
+			break;
+
 		raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
+		rwbase_schedule();
+		raw_spin_lock_irqsave(&rtm->wait_lock, flags);
 
-		/*
-		 * Schedule and wait for the readers to leave the critical
-		 * section. The last reader leaving it wakes the waiter.
-		 */
-		if (atomic_read(&rwb->readers) != 0)
-			rwbase_schedule();
 		set_current_state(state);
-		raw_spin_lock_irqsave(&rtm->wait_lock, flags);
 	}
-
-	atomic_set(&rwb->readers, WRITER_BIAS);
 	rwbase_restore_current_state();
+
+out_unlock:
 	raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
 	return 0;
 }
@@ -253,8 +262,7 @@ static inline int rwbase_write_trylock(s
 	atomic_sub(READER_BIAS, &rwb->readers);
 
 	raw_spin_lock_irqsave(&rtm->wait_lock, flags);
-	if (!atomic_read(&rwb->readers)) {
-		atomic_set(&rwb->readers, WRITER_BIAS);
+	if (__rwbase_write_trylock(rwb)) {
 		raw_spin_unlock_irqrestore(&rtm->wait_lock, flags);
 		return 1;
 	}
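(Aside, on the "Can do without CAS" comment: the plain read+set suffices
because, once the writer has done atomic_sub(READER_BIAS), new readers fail
the lockless fast path and have to take the slow path, which itself runs
under wait_lock -- roughly, abridged from __rwbase_read_lock(), the exact
code may differ:

	raw_spin_lock_irq(&rtm->wait_lock);
	/* Allow readers as long as the writer hasn't fully acquired it. */
	if (atomic_read(&rwb->readers) != WRITER_BIAS) {
		atomic_inc(&rwb->readers);
		raw_spin_unlock_irq(&rtm->wait_lock);
		return 0;
	}

So with wait_lock held across both the read and the set, no reader can slip
in between the two, and a CAS would buy nothing.)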