Date: 1 Aug 2013
From: Waiman Long
Subject: Re: [PATCH RFC 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
On 08/01/2013 03:10 PM, Peter Zijlstra wrote:
> On Wed, Jul 31, 2013 at 10:37:10PM -0400, Waiman Long wrote:
>
> OK, so overall I rather like the thing. It might be good to include a
> link to some MCS lock description; sadly, Wikipedia doesn't have an
> article on the concept :/
>
> http://www.cise.ufl.edu/tr/DOC/REP-1992-71.pdf
>
> That seems like a nice (short-ish) write-up of the general algorithm.
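
For readers who don't want to chase the PDF, an MCS-style queue lock boils
down to roughly the sketch below. This is illustrative C only, not the
patch's code; names are made up and barrier details are simplified. The key
property is that each waiter spins on a flag in its own queue node instead
of on the shared lock word.

/*
 * Illustrative MCS-style queue lock (not the patch's code; barrier
 * details simplified). Each waiter spins on a flag in its own node,
 * so contended waiters do not bounce a shared cacheline.
 */
struct mcs_node {
	struct mcs_node *next;
	int locked;		/* 0 = must wait, 1 = lock handed over */
};

static void mcs_lock(struct mcs_node **lock, struct mcs_node *node)
{
	struct mcs_node *prev;

	node->next = NULL;
	node->locked = 0;

	/* Atomically make ourselves the new tail of the queue. */
	prev = xchg(lock, node);
	if (!prev)
		return;			/* queue was empty; lock is ours */

	/* Link behind the old tail and spin on our private flag. */
	ACCESS_ONCE(prev->next) = node;
	while (!ACCESS_ONCE(node->locked))
		cpu_relax();
}

static void mcs_unlock(struct mcs_node **lock, struct mcs_node *node)
{
	struct mcs_node *next = ACCESS_ONCE(node->next);

	if (!next) {
		/* No visible successor: try to empty the queue. */
		if (cmpxchg(lock, node, NULL) == node)
			return;
		/* A new waiter raced in; wait until it links itself. */
		while (!(next = ACCESS_ONCE(node->next)))
			cpu_relax();
	}
	/*
	 * Pass the lock on. A real implementation needs release
	 * ordering on this store (and acquire on the spin above).
	 */
	ACCESS_ONCE(next->locked) = 1;
}
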
>
>> +typedef struct qspinlock {
>> + union {
>> + struct {
>> + u8 locked; /* Bit lock */
>> + u8 reserved;
>> + u16 qcode; /* Wait queue code */
>> + };
>> + u32 qlock;
>> + };
>> +} arch_spinlock_t;
>
>> +static __always_inline void queue_spin_unlock(struct qspinlock *lock)
>> +{
>> + barrier();
>> + ACCESS_ONCE(lock->locked) = 0;
>
> It's always good to add comments with barriers.
>
>> + smp_wmb();
>> +}
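
Such comments might look roughly like this; just a sketch of possible
wording, since the actual rationale for the ordering is the author's to
state:

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	/*
	 * barrier() is a compiler-only barrier: keep the compiler from
	 * moving critical-section accesses past the unlocking store.
	 */
	barrier();
	ACCESS_ONCE(lock->locked) = 0;
	/*
	 * smp_wmb() orders the store that releases the lock before any
	 * later stores from this CPU become visible.
	 */
	smp_wmb();
}
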
>
>> +/*
>> + * The queue node structure
>> + */
>> +struct qnode {
>> + struct qnode *next;
>> + u8 wait; /* Waiting flag */
>> + u8 used; /* Used flag */
>> +#ifdef CONFIG_DEBUG_SPINLOCK
>> + u16 cpu_nr; /* CPU number */
>> + void *lock; /* Lock address */
>> +#endif
>> +};
>> +
>> +/*
>> + * The 16-bit wait queue code is divided into the following 2 fields:
>> + * Bits 0-1 : queue node index
>> + * Bits 2-15: cpu number + 1
>> + *
>> + * The current implementation will allow a maximum of (1<<14)-1 = 16383 CPUs.
>
> I haven't yet read far enough to figure out why you need the -1 thing,
> but effectively you're restricted to 15k due to this.
>

It is exactly 16k-1, not 15k. That is because the code is the CPU
number + 1, so codes 1 to 16k-1 represent CPUs 0 to 16k-2.
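
To illustrate the encoding, here is a pair of hypothetical helpers (not
part of the patch) following the field layout described in the comment
above:

/*
 * Illustrative helpers for the 16-bit wait queue code (not from the
 * patch). Bits 0-1: per-CPU queue node index; bits 2-15: cpu + 1,
 * so a code of 0 means no CPU is encoded.
 */
static inline u16 queue_encode_qcode(unsigned int cpu, unsigned int idx)
{
	return ((u16)(cpu + 1) << 2) | (idx & 3);
}

static inline unsigned int queue_qcode_to_cpu(u16 qcode)
{
	return (qcode >> 2) - 1;	/* only valid when (qcode >> 2) != 0 */
}

static inline unsigned int queue_qcode_to_idx(u16 qcode)
{
	return qcode & 3;
}

With 14 bits holding cpu + 1 and the value 0 left unused, the largest
encodable CPU number is 16382, i.e. (1<<14)-1 = 16383 CPUs, matching the
comment in the patch.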




