Date: Wed, 4 Jan 2023 22:34:54 -0500
Subject: Re: [PATCH V2] locking/qspinlock: Optimize pending state waiting for unlock
From: Waiman Long <>
On 1/4/23 21:19, guoren@kernel.org wrote:
> From: Guo Ren <guoren@linux.alibaba.com>
>
> When we're pending, we only care about the lock value; xchg_tail
> wouldn't affect the pending state. That means the hardware thread
> could stay in a sleep state and leave the rest of the pipeline's
> execution-unit resources to the other hardware threads. This is the
> SMT scenario within the same core, not an entering-low-power-state
> situation. Of course, the granularity between cores is "cacheline",
> but the granularity between SMT hw threads of the same core could
> be "byte", which the internal LSU handles. For example, when a
> hw-thread yields the resources of the core to other hw-threads,
> this patch could help the hw-thread stay in the sleep state and
> prevent it from being woken up by another hw-thread's xchg_tail.
>
> Link: https://lore.kernel.org/lkml/20221224120545.262989-1-guoren@kernel.org/
> Signed-off-by: Guo Ren <guoren@linux.alibaba.com>
> Signed-off-by: Guo Ren <guoren@kernel.org>
> Acked-by: Waiman Long <longman@redhat.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Will Deacon <will@kernel.org>
> Cc: Ingo Molnar <mingo@redhat.com>
> ---
> Changes in v2:
>  - Add acked tag
>  - Optimize commit log
>  - Add discussion Link tag
> ---
>  kernel/locking/qspinlock.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index 2b23378775fe..ebe6b8ec7cb3 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -371,7 +371,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  	/*
>  	 * We're pending, wait for the owner to go away.
>  	 *
> -	 * 0,1,1 -> 0,1,0
> +	 * 0,1,1 -> *,1,0
>  	 *
>  	 * this wait loop must be a load-acquire such that we match the
>  	 * store-release that clears the locked bit and create lock
> @@ -380,7 +380,7 @@ void __lockfunc queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>  	 * barriers.
>  	 */
>  	if (val & _Q_LOCKED_MASK)
> -		atomic_cond_read_acquire(&lock->val, !(VAL & _Q_LOCKED_MASK));
> +		smp_cond_load_acquire(&lock->locked, !VAL);
>
>  	/*
>  	 * take ownership and clear the pending bit.
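[Editor's note: for readers outside the kernel tree, here is a minimal
standalone C11 sketch of the idea above. The field layout and
_Q_LOCKED_MASK mirror the little-endian case of
include/asm-generic/qspinlock_types.h, but the two wait loops are
simplified stand-ins for atomic_cond_read_acquire() and
smp_cond_load_acquire() (the real primitives can additionally park the
hardware thread in a wait-for-event state on architectures that support
it). This is an illustration under those assumptions, not the kernel's
implementation.]

/*
 * Standalone illustration (not kernel code): why waiting on the
 * locked byte is quieter than waiting on the whole lock word.
 */
#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define _Q_LOCKED_MASK	0x000000ffU	/* bits  0-7 : locked byte  */
					/* bits  8-15: pending byte */
					/* bits 16-31: tail         */

struct qspinlock {
	union {
		_Atomic uint32_t val;
		struct {
			_Atomic uint8_t locked;	/* little-endian layout */
			uint8_t pending;
			uint16_t tail;
		};
	};
};

/*
 * Before the patch: monitor the whole 32-bit word. Every xchg_tail()
 * by a queueing CPU rewrites the tail halfword, so the load below
 * observes a changed value and the pending SMT thread is needlessly
 * re-woken even though the locked byte it cares about is unchanged.
 */
static void wait_on_word(struct qspinlock *lock)
{
	while (atomic_load_explicit(&lock->val, memory_order_acquire) &
	       _Q_LOCKED_MASK)
		;
}

/*
 * After the patch: monitor only the locked byte. Stores to the tail
 * bytes never touch it, so the hw thread can stay asleep until the
 * owner's store-release clears the byte.
 */
static void wait_on_byte(struct qspinlock *lock)
{
	while (atomic_load_explicit(&lock->locked, memory_order_acquire))
		;
}

int main(void)
{
	struct qspinlock lock = { .val = 1 };	/* owner holds the lock */

	/* Owner unlocks: store-release on the locked byte alone. */
	atomic_store_explicit(&lock.locked, 0, memory_order_release);

	wait_on_byte(&lock);	/* returns at once: the byte is clear   */
	wait_on_word(&lock);	/* also returns, but while spinning it
				   would churn on every tail update    */
	printf("lock word: 0x%08x\n",
	       atomic_load_explicit(&lock.val, memory_order_relaxed));
	return 0;
}

[The design point the patch leans on is that xchg_tail() writes only
the upper 16 bits of the word, so narrowing the monitored location from
&lock->val to &lock->locked removes the only writes that could
spuriously wake the pending waiter.]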
Yes, the new patch description looks good to me. Thanks for sending the v2.
Cheers,
Longman