Date:	2008-11-24
From:	Max Krasnyansky
Subject: Re: RT sched: cpupri_vec lock contention with def_root_domain and no load balance
Dimitri Sivanich wrote:
> On Sat, Nov 22, 2008 at 04:18:29PM +0800, Li Zefan wrote:
>> Max Krasnyansky wrote:
>>> Dimitri Sivanich wrote:
>>>> Which is the way sched_load_balance is supposed to work. You need to set
>>>> sched_load_balance=0 for all cpusets containing any cpu you want to disable
>>>> balancing on; otherwise some balancing will happen.
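
A quick illustration for anyone reproducing this: the flag is toggled by writing to each
cpuset's sched_load_balance file. The sketch below assumes the legacy cpuset filesystem is
mounted at /dev/cpuset and that a cpuset named "isolated" already exists (both are
assumptions, not details from this thread); with a cgroup mount the file is usually called
cpuset.sched_load_balance instead.

    /* cc -o no_balance no_balance.c && sudo ./no_balance [path] */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
            /* Default path is an assumption: legacy cpuset fs at /dev/cpuset,
             * cpuset named "isolated". Pass another path as argv[1] if needed. */
            const char *path = (argc > 1) ? argv[1]
                    : "/dev/cpuset/isolated/sched_load_balance";
            FILE *f = fopen(path, "w");

            if (!f) {
                    perror(path);
                    return EXIT_FAILURE;
            }
            fputs("0\n", f);        /* 0 = no load balancing within this cpuset */
            if (fclose(f) != 0) {
                    perror(path);
                    return EXIT_FAILURE;
            }
            return EXIT_SUCCESS;
    }

As noted above, the same write has to land in every cpuset whose cpus include the CPU in
question, the root cpuset included, or some balancing will still happen.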
>>> There won't be much balancing in this case, because there is just one cpu per
>>> domain.
>>> In other words, no, that's not how it is supposed to work. There is code in
>>> cpu_attach_domain() that is supposed to remove redundant levels
>>> (sd_degenerate() stuff). There is an explicit check in there for numcpus == 1.
>>> btw, the reason you got a different result than I did is that you have a
>>> NUMA box whereas mine is UMA. I was able to reproduce the problem, though, by
>>> enabling the multi-core scheduler, in which case I also get one redundant
>>> domain level (CPU) with a single CPU in it.
>>> So we definitely need to fix this. I'll try to poke around tomorrow and figure
>>> out why the redundant level is not dropped.
>>>
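
For reference, the check being discussed is sd_degenerate() in kernel/sched.c. The snippet
below is a heavily simplified user-space model of it, with abridged stand-in types and an
illustrative flag value; it is meant to show the shape of the single-CPU check, not to
reproduce the kernel source.

    #include <stdio.h>

    /* Abridged stand-ins for the kernel structures; the real code is in kernel/sched.c. */
    struct sched_group {
            struct sched_group *next;       /* circular list of groups in the domain */
    };

    struct sched_domain {
            unsigned int span_weight;       /* number of cpus spanned by this domain */
            unsigned long flags;            /* SD_LOAD_BALANCE etc. */
            struct sched_group *groups;
    };

    #define SD_LOAD_BALANCE 0x0001          /* illustrative value only */

    /*
     * Simplified model of the degenerate check: a domain spanning a single cpu,
     * or one whose balancing flags have only a single group to work with, adds
     * nothing and can be dropped.
     */
    static int sd_degenerate(struct sched_domain *sd)
    {
            if (sd->span_weight == 1)       /* the "numcpus == 1" check */
                    return 1;

            if ((sd->flags & SD_LOAD_BALANCE) &&
                sd->groups != sd->groups->next)     /* at least two groups left */
                    return 0;

            return 1;
    }

    int main(void)
    {
            struct sched_group g = { &g };                  /* single group */
            struct sched_domain sd = { 1, SD_LOAD_BALANCE, &g };

            printf("degenerate: %d\n", sd_degenerate(&sd)); /* prints 1 */
            return 0;
    }

cpu_attach_domain() runs this kind of check on each domain level and unlinks the levels
that come back degenerate, which is why a properly degenerated setup should not leave a
redundant single-CPU level behind.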
>> You were not using the latest kernel, were you?
>>
>> There was a bug in the sd degenerate code, and it has already been fixed:
>> http://lkml.org/lkml/2008/11/8/10
>
> With the above patch added, we now see the results Max described:
> individual root domains are created, each spanning just its own cpu,
> when sched_load_balance is turned off.

Nice.

Max

