Subject: Re: [PATCH v3 1/2] qspinlock: Introducing a 4-byte queue spinlock implementation
On 01/28/2014 07:20 PM, Andi Kleen wrote:
> So the 1-2 threads case is the standard case on a small
> system, isn't it? This may well cause regressions.
>

Yes, it is possible that in a lightly contended case the queue spinlock
may be a bit slower because of the slowpath overhead. I did observe a
slight slowdown in some of the lightly contended workloads. I will run
more tests on a smaller 2-socket system, or even a 1-socket system, to
see whether any regression shows up there.
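
To make the source of that overhead concrete, here is a minimal
userspace sketch of the fastpath/slowpath split, written with C11
atomics rather than the patch's kernel primitives; the names and the
trivial placeholder slowpath are illustrative assumptions, not the
actual patch code:

#include <stdatomic.h>

/* A 4-byte lock word: 0 = unlocked, non-zero = locked (and/or queued). */
struct qspinlock_sketch {
	atomic_int val;
};

#define QLOCK_LOCKED	1

/*
 * Placeholder slowpath: the real one sets up a queue node and waits in
 * line, and that node setup is the extra work a lightly contended
 * (1-2 thread) workload ends up paying for.
 */
static void qspinlock_sketch_slowpath(struct qspinlock_sketch *lock)
{
	int expected = 0;

	while (!atomic_compare_exchange_weak(&lock->val, &expected,
					     QLOCK_LOCKED))
		expected = 0;
}

static inline void qspinlock_sketch_lock(struct qspinlock_sketch *lock)
{
	int expected = 0;

	/*
	 * Fastpath: a single compare-and-swap on the 4-byte lock word.
	 * Uncontended acquisitions never touch the queue at all.
	 */
	if (atomic_compare_exchange_strong(&lock->val, &expected,
					   QLOCK_LOCKED))
		return;

	/* Any contention at all drops us into the slowpath. */
	qspinlock_sketch_slowpath(lock);
}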

>> In the extremely unlikely case that all the queue node entries are
>> used up, the current code will fall back to busy spinning without
>> waiting in a queue with warning message.
> Traditionally we had some code which could take thousands
> of locks in rare cases (e.g. all locks in a hash table or all locks of
> a big reader lock)
>
> The biggest offender was the mm for changing mmu
> notifiers, but I believe that's a mutex now.
> lglocks presumably still can do it on large enough
> systems. I wouldn't be surprised if there is
> other code which e.g. may take all locks in a table.
>
> I don't think the warning is valid and will
> likely trigger in some obscure cases.
>
> -Andi

As explained by George, a queue node is only needed while a thread is
waiting to acquire the lock. Once the thread gets the lock, the node can
be released and reused, so the number of locks held at any one time does
not matter; only the nesting depth of lock acquisitions in progress does.
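
For illustration, below is a minimal userspace sketch of that node
lifecycle, using C11 atomics and a per-thread node array in place of the
patch's per-CPU data; the pool size of 4 (task/softirq/hardirq/NMI
nesting) and every name here are assumptions for the sketch, not the
patch's actual code:

#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

struct qnode {
	atomic_bool	 wait;		/* spin here while queued behind someone */
	struct qnode	*_Atomic next;	/* successor in the wait queue */
};

struct qsl {
	atomic_bool	 locked;	/* the lock itself */
	struct qnode	*_Atomic tail;	/* last waiter, NULL when queue empty */
};

/* Assumed nesting depth: task, softirq, hardirq and NMI contexts. */
#define MAX_NESTING	4

static _Thread_local struct qnode qnodes[MAX_NESTING];
static _Thread_local int qnodes_used;		/* nodes in use right now */

static void qsl_lock_slowpath(struct qsl *lock)
{
	struct qnode *node = &qnodes[qnodes_used++];	/* grab a free node */
	struct qnode *prev, *next;
	bool expected;

	atomic_store(&node->wait, true);
	atomic_store(&node->next, (struct qnode *)NULL);

	/* Join the tail of the wait queue and spin until we reach its head. */
	prev = atomic_exchange(&lock->tail, node);
	if (prev) {
		atomic_store(&prev->next, node);
		while (atomic_load(&node->wait))
			;
	}

	/* Head of the queue: wait for the current holder to let go. */
	expected = false;
	while (!atomic_compare_exchange_weak(&lock->locked, &expected, true))
		expected = false;

	/*
	 * Leave the queue: either empty it, or hand the head position to
	 * our successor so it can spin on the lock word in turn.
	 */
	next = node;
	if (!atomic_compare_exchange_strong(&lock->tail, &next,
					    (struct qnode *)NULL)) {
		while (!(next = atomic_load(&node->next)))
			;			/* successor still linking in */
		atomic_store(&next->wait, false);
	}

	/*
	 * The lock is now held and nothing references the node any more,
	 * so it goes back to the pool before the critical section starts.
	 */
	qnodes_used--;
}

static void qsl_unlock(struct qsl *lock)
{
	/* Unlocking never needs a node: nodes are held only while waiting. */
	atomic_store(&lock->locked, false);
}

So the pool can only be exhausted by the nesting depth of acquisitions
in progress on one CPU, not by the number of locks a piece of code
happens to hold, which is why lglocks or a hash table full of held
locks should not hit the fallback path.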

-Longman
