Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
On 06/01/2010 08:27 PM, Andi Kleen wrote:
> On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
>
>> We are running everything on NUMA (since all modern machines are now NUMA).
>> At what scale do the issues become observable?
>>
> On Intel platforms it's visible starting with 4 sockets.
>

Can you recommend a benchmark that shows bad behaviour? I'll run it
with ticket spinlocks and Gleb's patch. I have a 4-way Nehalem-EX,
presumably the huge number of threads will magnify the problem even more
there.
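
Not a benchmark recommendation, just to make the measurement concrete: a
miniature userspace version of such a test could look like the sketch below
(illustrative names; glibc pthread spinlocks, which are plain test-and-set
rather than ticket locks). Each thread hammers one lock for a fixed time and
reports how many acquisitions it got; pin the threads across sockets with
numactl or pthread_setaffinity_np and the spread in the per-thread counts is
the unfairness. The real comparison of ticket locks vs. Gleb's patch would of
course have to be done in-kernel.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define NTHREADS 16		/* illustrative; one per core on the test box */

static pthread_spinlock_t lock;
static atomic_int stop;
static unsigned long counts[NTHREADS];

static void *worker(void *arg)
{
	long id = (long)arg;

	while (!atomic_load(&stop)) {
		pthread_spin_lock(&lock);
		counts[id]++;			/* trivial critical section */
		pthread_spin_unlock(&lock);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[NTHREADS];
	long i;

	pthread_spin_init(&lock, PTHREAD_PROCESS_PRIVATE);
	for (i = 0; i < NTHREADS; i++)
		pthread_create(&t[i], NULL, worker, (void *)i);
	sleep(10);				/* contend for 10 seconds */
	atomic_store(&stop, 1);
	for (i = 0; i < NTHREADS; i++)
		pthread_join(t[i], NULL);
	for (i = 0; i < NTHREADS; i++)
		printf("thread %2ld: %lu acquisitions\n", i, counts[i]);
	return 0;
}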

>>>> I understand that reason and do not propose to go back to the old spinlock
>>>> on physical HW! But with virtualization the performance hit is unbearable.
>>>>
>>>>
>>> Extreme unfairness can be unbearable too.
>>>
>>>
>> Well, the question is what happens first. In our experience, vcpu
>> overcommit is a lot more painful. People will never see the NUMA
>> unfairness issue if they can't use kvm due to the vcpu overcommit problem.
>>
> You really have to address both; if you don't fix them both,
> users will eventually run into one of them and be unhappy.
>

That's definitely the long term plan. I consider Gleb's patch the first
step.

Do you have any idea how we can tackle both problems?
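
For anyone skimming the thread, here is a standalone sketch (C11 atomics,
userspace, illustrative names -- not the kernel code and not Gleb's patch) of
the two lock flavours being weighed against each other. The FIFO ticket lock
stalls every later waiter whenever the vcpu holding the next ticket is
preempted by the hypervisor, which is the overcommit pain; the plain
test-and-set lock lets whichever vcpu is actually running take the lock, at
the price of possible starvation, which is the NUMA fairness complaint.

#include <stdatomic.h>

/* Ticket lock: strictly FIFO. */
struct ticket_lock {
	atomic_ushort head;	/* ticket currently being served */
	atomic_ushort tail;	/* next ticket to hand out       */
};

static inline void ticket_spin_lock(struct ticket_lock *l)
{
	unsigned short me = atomic_fetch_add(&l->tail, 1);

	while (atomic_load(&l->head) != me)
		;	/* spin; cpu_relax() in the kernel */
}

static inline void ticket_spin_unlock(struct ticket_lock *l)
{
	atomic_fetch_add(&l->head, 1);
}

/* Unfair test-and-set lock: no queueing, no ordering guarantees. */
static inline void tas_spin_lock(atomic_flag *l)
{
	while (atomic_flag_test_and_set(l))
		;	/* spin */
}

static inline void tas_spin_unlock(atomic_flag *l)
{
	atomic_flag_clear(l);
}

That fallback -- keep the ticket lock on bare metal, use the simple lock when
the kernel knows it is running under a hypervisor -- is the behaviour the
subject-line patch is after.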

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


