Date: 2013-08-29
From: Waiman Long
Subject: Re: [PATCH RFC v2 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
On 08/27/2013 08:09 AM, Alexander Fyodorov wrote:
>> I also thought that the x86 spinlock unlock path was an atomic add. It
>> only came to my realization recently that this is not the case. The
>> UNLOCK_LOCK_PREFIX will be mapped to "" except for some old 32-bit x86
>> processors.
> Hmm, I didn't know that. A Google search turned up these rules for x86 memory ordering:
> * Loads are not reordered with other loads.
> * Stores are not reordered with other stores.
> * Stores are not reordered with older loads.
> So the x86 memory model is rather strict and a memory barrier is really not needed in the unlock path - xadd is a store, and stores are not reordered with earlier loads or stores, so it already behaves like a memory barrier; and since only the lock's owner modifies "ticket.head", the "add" instruction need not be atomic.
>
> But this is true only for x86; other architectures have more relaxed memory ordering. Maybe we should allow arch code to redefine queue_spin_unlock(), and define a version without smp_mb() for x86?
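
Letting the arch redefine queue_spin_unlock() would presumably end up looking something like the sketch below (illustration only - the file locations, the one-byte "locked" field and the override pattern are placeholders, not code from the actual patch):

/* asm-generic side (sketch): default unlock with a full barrier */
#ifndef queue_spin_unlock
static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	smp_mb();			/* order critical section before release */
	ACCESS_ONCE(lock->locked) = 0;	/* hand the lock to the next waiter */
}
#endif

/* arch/x86 side (sketch): stores are not reordered with earlier
 * loads/stores on x86, so a compiler barrier is enough here.
 */
#define queue_spin_unlock queue_spin_unlock
static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	barrier();
	ACCESS_ONCE(lock->locked) = 0;
}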

What I have been thinking is to set a flag in an architecture-specific
header file to tell whether the architecture needs a memory barrier. The
generic code will then do either an smp_mb() or a barrier() depending on
the presence or absence of the flag, roughly as in the sketch below. I
would prefer to do more in the generic code, if possible.
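
Roughly (the flag name and the one-byte "locked" field below are made-up placeholders, not the actual patch):

/* arch/x86/include/asm/qspinlock.h (sketch): stores are ordered on x86,
 * so the unlock path can get away with a compiler barrier.
 */
#define ARCH_QSPINLOCK_UNLOCK_NO_MB

/* generic qspinlock code (sketch) */
static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
#ifdef ARCH_QSPINLOCK_UNLOCK_NO_MB
	barrier();	/* compiler barrier only */
#else
	smp_mb();	/* full barrier for weakly ordered architectures */
#endif
	ACCESS_ONCE(lock->locked) = 0;
}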

Regards,
Longman

