Date: 2016-02-12
From: Waiman Long
Subject: Re: [PATCH 0/2] locking/mutex: Enable optimistic spinning of lock waiter
On 02/09/2016 04:44 PM, Jason Low wrote:
> On Tue, 2016-02-09 at 14:47 -0500, Waiman Long wrote:
>> This patchset is a variant of PeterZ's "locking/mutex: Avoid spinner
>> vs waiter starvation" patch. The major difference is that the
>> waiter-spinner won't enter the OSQ used by the spinners. Instead,
>> it will spin directly on the lock in parallel with the queue head
>> of the OSQ. So there will be a bit more contention on the lock
>> cacheline, but that shouldn't have a noticeable impact on system
>> performance.
>>
>> This patchset tries to address two issues with Peter's patch:
>>
>> 1) Ding Tianhong still found that a hanging task could happen in
>> some cases.
>> 2) Jason Low found a performance regression for some AIM7 workloads.
> This might help address the hang that Ding reported.
>
> Performance-wise, this patchset reduced AIM7 fserver throughput on the
> 8-socket machine by more than 70% at 1000+ users.
>
>               | fserver JPM
> --------------+------------
> baseline      |     ~450000
> Peter's patch |     ~410000
> This patchset |     ~100000
>
> My guess is that waiters spinning on and acquiring the lock is less
> efficient, and this patchset further increases the chance that
> waiters, rather than the fastpath optimistic spinners, get to spin on
> and acquire the lock.
>
> Jason
>
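
For context, the waiter-spinning scheme described above boils down to
something like the simplified userspace sketch below. The struct and
function names here are stand-ins, not the actual kernel/locking/mutex.c
internals; the real code also checks the owner's on_cpu state and
need_resched() before continuing to spin:

#include <stdatomic.h>
#include <stdbool.h>

struct mutexish {
	_Atomic unsigned long owner;	/* 0 == unlocked */
};

static bool try_acquire(struct mutexish *m, unsigned long me)
{
	unsigned long unlocked = 0;

	return atomic_compare_exchange_strong(&m->owner, &unlocked, me);
}

static void waiter_spin(struct mutexish *m, unsigned long me)
{
	/*
	 * The waiter at the head of the wait list spins directly on the
	 * lock word, racing with the head of the OSQ instead of joining
	 * it; hence the extra traffic on the lock cacheline noted above.
	 */
	while (!try_acquire(m, me))
		;	/* spin until the owner releases the lock */
}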

That was just a configuration error: the CPU frequency scaling governor
wasn't set to performance. With the performance governor, the
patchset's throughput was comparable to Peter's patch.
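
For anyone reproducing the numbers, the usual fix is
"cpupower frequency-set -g performance"; the equivalent sysfs writes
look roughly like the throwaway sketch below (run as root, and not part
of the patchset):

#include <stdio.h>

int main(void)
{
	char path[128];
	int cpu;

	for (cpu = 0; ; cpu++) {
		snprintf(path, sizeof(path),
			 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor",
			 cpu);
		FILE *f = fopen(path, "w");
		if (!f)
			break;		/* stop at the first missing CPU */
		fputs("performance\n", f);
		fclose(f);
	}
	return 0;
}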

Cheers,
Longman
