Date: Tue, 16 Jul 2019
From: Peter Zijlstra <peterz@infradead.org>
Subject: Re: [PATCH v3 2/5] locking/qspinlock: Refactor the qspinlock slow path
On Tue, Jul 16, 2019 at 10:53:02AM -0400, Alex Kogan wrote:
> On Jul 16, 2019, at 6:20 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Mon, Jul 15, 2019 at 03:25:33PM -0400, Alex Kogan wrote:
> >
> >> +/*
> >> + * set_locked_empty_mcs - Try to set the spinlock value to _Q_LOCKED_VAL,
> >> + * and by doing that unlock the MCS lock when its waiting queue is empty
> >> + * @lock: Pointer to queued spinlock structure
> >> + * @val: Current value of the lock
> >> + * @node: Pointer to the MCS node of the lock holder
> >> + *
> >> + * *,*,* -> 0,0,1
> >> + */
> >> +static __always_inline bool __set_locked_empty_mcs(struct qspinlock *lock,
> >> +						    u32 val,
> >> +						    struct mcs_spinlock *node)
> >> +{
> >> +	return atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL);
> >> +}
> >
> > That name is nonsense. It should be something like:
> >
> > static __always_inline bool __try_clear_tail(…)
>
> We already have set_locked(), so I was trying to convey the fact that we are
> doing the same here, but only when the MCS chain is empty.
>
> I can use __try_clear_tail() instead.

Thing is, we go into this function with: *,0,1 and are trying to obtain
0,0,1. IOW, we're trying to clear the tail, while preserving pending and
locked.
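
For readers who want to see that transition concretely, here is a minimal
userspace sketch (not the kernel code) of the *,0,1 -> 0,0,1 step: a
compare-and-swap that clears the tail while keeping the locked byte set,
using C11 atomics in place of atomic_try_cmpxchg_relaxed(). The constants
mirror the usual qspinlock layout for small-NR_CPUS configs (locked byte in
bits 0-7, pending in bits 8-15, tail above bit 16), and the try_clear_tail()
name simply follows the rename suggested above; both are illustrative
assumptions, not the patch itself.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

#define _Q_LOCKED_VAL	(1U << 0)	/* (0,0,1): no tail, no pending, locked */
#define _Q_TAIL_OFFSET	16		/* tail sits above pending + locked */

/*
 * Try to move the lock word from (tail,0,1) to (0,0,1): clear the tail
 * because the MCS queue turned out to be empty, while leaving the locked
 * byte set.  Returns false if another CPU changed the word (e.g. queued
 * itself) between reading @val and the cmpxchg.
 */
static bool try_clear_tail(_Atomic uint32_t *lock, uint32_t val)
{
	return atomic_compare_exchange_strong_explicit(lock, &val,
						       _Q_LOCKED_VAL,
						       memory_order_relaxed,
						       memory_order_relaxed);
}

int main(void)
{
	/* Lock word as the holder sees it: arbitrary non-zero tail, locked set. */
	_Atomic uint32_t lock = (2U << _Q_TAIL_OFFSET) | _Q_LOCKED_VAL;
	uint32_t val = atomic_load_explicit(&lock, memory_order_relaxed);

	if (try_clear_tail(&lock, val))
		printf("tail cleared, lock word is now 0x%" PRIx32 "\n",
		       atomic_load_explicit(&lock, memory_order_relaxed));
	return 0;
}

In the quoted hunk the same idea is expressed with
atomic_try_cmpxchg_relaxed(&lock->val, &val, _Q_LOCKED_VAL), which is why a
name describing the tail-clearing intent reads better than one about
"setting locked".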
