Date: Fri, 20 May 2016
From: Peter Zijlstra
Subject: Re: sem_lock() vs qspinlocks

On Fri, May 20, 2016 at 04:44:19PM -0400, Waiman Long wrote:
> On 05/20/2016 07:58 AM, Peter Zijlstra wrote:
> >On Thu, May 19, 2016 at 10:39:26PM -0700, Davidlohr Bueso wrote:
> >>As such, the following restores the behavior of the ticket locks and 'fixes'
> >>(or hides?) the bug in sems. It is, naturally, not the correct approach:
> >>
> >>@@ -290,7 +290,8 @@ static void sem_wait_array(struct sem_array *sma)
> >>
> >> 	for (i = 0; i < sma->sem_nsems; i++) {
> >> 		sem = sma->sem_base + i;
> >>-		spin_unlock_wait(&sem->lock);
> >>+		while (atomic_read(&sem->lock))
> >>+			cpu_relax();
> >> 	}
> >> 	ipc_smp_acquire__after_spin_is_unlocked();
> >>}
> >The actual bug is clear_pending_set_locked() not having acquire
> >semantics. And the above 'fixes' things because it will observe the old
> >pending bit or the locked bit, so it doesn't matter if the store
> >flipping them is delayed.
>
> clear_pending_set_locked() is not the only place where the lock is set.
> If there is more than one waiter, the queuing path will be used instead.
> set_locked(), which is also an unordered store, will then be used to set
> the lock.

Ah yes. I didn't get that far. One case was enough :-)
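
For reference, both stores in question are plain WRITE_ONCE()s. A rough
sketch, following the 4.6-era kernel/locking/qspinlock.c helpers
(_Q_PENDING_BITS == 8 layout; simplified, not the verbatim mainline source):

/*
 * Both helpers take the lock with a plain byte/short store. Neither
 * provides acquire semantics, so a concurrent spin_unlock_wait()
 * doing a plain load of the lock word can fail to observe the
 * transition to locked before the new owner enters its critical
 * section.
 */

/* Pending waiter takes the lock: *,1,0 -> *,0,1 */
static __always_inline void clear_pending_set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	WRITE_ONCE(l->locked_pending, _Q_LOCKED_VAL);
}

/* Head of the MCS queue takes the lock: *,*,0 -> *,0,1 */
static __always_inline void set_locked(struct qspinlock *lock)
{
	struct __qspinlock *l = (void *)lock;

	WRITE_ONCE(l->locked, _Q_LOCKED_VAL);
}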
