Subject: Re: [PATCH v1 2/2] x86/hyperv: make HvNotifyLongSpinWait hypercall
From: Waiman Long
Date: 2018-11-01
On 10/31/2018 11:20 PM, Yi Sun wrote:
> On 18-10-31 18:15:39, Peter Zijlstra wrote:
>> On Wed, Oct 31, 2018 at 11:07:22AM -0400, Waiman Long wrote:
>>> On 10/31/2018 10:10 AM, Peter Zijlstra wrote:
>>>> On Wed, Oct 31, 2018 at 09:54:17AM +0800, Yi Sun wrote:
>>>>> On 18-10-23 17:33:28, Yi Sun wrote:
>>>>>> On 18-10-23 10:51:27, Peter Zijlstra wrote:
>>>>>>> Can you try and explain why vcpu_is_preempted() doesn't work for you?
>>>>>> I thought HvSpinWaitInfo is used to notify the hypervisor of the
>>>>>> spin count, which is different from vcpu_is_preempted. So I did
>>>>>> not consider vcpu_is_preempted.
>>>>>>
>>>>>> But HvSpinWaitInfo is quite a simple function and could be
>>>>>> combined with vcpu_is_preempted. So I think it is OK to use
>>>>>> vcpu_is_preempted to keep the code clean. I will give it a try.
>>>>> After checking the code, there is one issue with calling
>>>>> vcpu_is_preempted. There are two spin loops in qspinlock_paravirt.h.
>>>>> The loop in 'pv_wait_node' calls vcpu_is_preempted, but the loop in
>>>>> 'pv_wait_head_or_lock' does not, nor does it call any other op of
>>>>> 'pv_lock_ops'. So I am afraid we have to add one more op to
>>>>> 'pv_lock_ops' to do this.
>>>> Why? Would not something like the below cure that? Waiman, can you have
>>>> a look at this; I always forget how that paravirt crud works.
>>> There are two major reasons why the vcpu_is_preempted() test isn't
>>> done at pv_wait_head_or_lock(). First of all, we may not have a valid
>>> prev pointer if the waiter is the first one to enter the queue while
>>> the lock is busy. Secondly, because of lock stealing, the CPU number
>>> pointed to by a valid prev pointer may not be the CPU that is actually
>>> holding the lock. Another minor reason is that we want to minimize the
>>> lock transfer latency and so don't want to sleep too early while
>>> waiting at the queue head.
>> So Yi, are you actually seeing a problem? If so, can you give details?
> Where does the patch come from? I cannot find it through Google.
>
> Per Waiman's comment, it does not seem suitable to call
> vcpu_is_preempted() in pv_wait_head_or_lock(). So we cannot make the
> HvSpinWaitInfo notification through vcpu_is_preempted() in that case.
> Based on that, I suggest adding one more callback function to
> pv_lock_ops.

I am hesitant to add any additional check to the spinning loop in
pv_wait_head_or_lock(), especially one that is a hypercall or a callback
that takes time to execute. The testing that I did in the past indicated
that it would slow down locking performance, especially if the VM wasn't
overcommitted at all.
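
To make the cost concrete: the relevant spin loop looks roughly like the
below (simplified from kernel/locking/qspinlock_paravirt.h), and any
notification would have to go into its body. The pv_spin_wait_info()
hook here is purely illustrative, not from an actual patch:

	/*
	 * Simplified sketch of the trylock spin loop in
	 * pv_wait_head_or_lock().  The pv_spin_wait_info() call marks
	 * where the proposed notification would sit; every pass through
	 * the loop would then pay for a callback or, worse, a hypercall.
	 */
	for (loop = SPIN_THRESHOLD; loop; loop--) {
		if (trylock_clear_pending(lock))
			goto gotlock;
		pv_spin_wait_info(SPIN_THRESHOLD - loop);	/* illustrative */
		cpu_relax();
	}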

Any additional slack in pv_wait_node() can be mitigated by the lock
stealing that can happen. Slack in pv_wait_head_or_lock(), on the other
hand, will certainly increase the lock transfer latency and impact
performance. So you need performance data to show that the change is
worthwhile.
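
For reference, the new hook being proposed would amount to roughly the
following addition to struct pv_lock_ops in
arch/x86/include/asm/paravirt_types.h. The member name, its signature,
and the idea of wiring it to HvNotifyLongSpinWait on Hyper-V are my
sketch, not from an actual patch:

	struct pv_lock_ops {
		void (*queued_spin_lock_slowpath)(struct qspinlock *lock, u32 val);
		struct paravirt_callee_save queued_spin_unlock;

		void (*wait)(u8 *ptr, u8 val);
		void (*kick)(int cpu);

		struct paravirt_callee_save vcpu_is_preempted;

		/* proposed (sketch): report the spin count to the hypervisor */
		void (*spin_wait_info)(u32 spin_count);
	} __no_randomize_layout;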

As for performance testing, the kernel has a built-in locktorture test
if you have it configured in. So please show us the performance data
with and without the patch.
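
For example, something along these lines (assuming a kernel built with
CONFIG_LOCK_TORTURE_TEST=m; the parameter values are arbitrary, see
Documentation/locking/locktorture.txt):

	# Exercise spinlocks with 16 writer kthreads, printing stats
	# every 30 seconds; final statistics land in dmesg on rmmod.
	modprobe locktorture torture_type=spin_lock nwriters_stress=16 stat_interval=30
	sleep 300
	rmmod locktorture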

Cheers,
Longman
