    Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
    On Tue, 2011-09-13 at 21:51 +0530, Srivatsa Vaddagiri wrote:
    > > I can't read it seems.. I thought you were talking about increasing the
    > > period,
    >
    > Mm ..I brought up the increased lock contention with reference to this
    > experimental result that I posted earlier:
    >
    > > Tuning min_interval and max_interval of various sched_domains to 1
    > > and also setting sched_cfs_bandwidth_slice_us to 500 does cut down idle
    > > time further to 2.7%

    Yeah, that's the not being able to read part..

    > Value of sched_cfs_bandwidth_slice_us was reduced from default of 5000us
    > to 500us, which (along with reduction of min/max interval) helped cut down
    > idle time further (3.9% -> 2.7%). I was commenting that this may not necessarily
    > be optimal (as for example low 'sched_cfs_bandwidth_slice_us' could result
    > in all cpus contending for cfs_b->lock very frequently).

    Right.. so this seems to suggest you're migrating a lot.

    Also, what workload are we talking about? The insane one with 5 groups of
    weight 1024?

    Ramping up the frequency of the load-balancer and giving out smaller
    slices is really anti-scalability.. I bet a lot of that 'reclaimed' idle
    time is spent in system time.
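
    To make the slice/lock trade-off concrete, here is a rough toy sketch in C
    (not the actual kernel code; all toy_* names and the numbers are invented
    for illustration) of the idea behind sched_cfs_bandwidth_slice_us: each
    CPU refills its local runtime from the global quota pool one slice at a
    time, taking the equivalent of cfs_b->lock on every refill, so cutting the
    slice from 5000us to 500us makes each CPU hit that global lock roughly ten
    times as often for the same amount of quota consumed.

    /*
     * Toy sketch only -- not the real kernel code.  It models the idea
     * behind sched_cfs_bandwidth_slice_us: each CPU's cfs_rq refills its
     * local runtime from the global quota pool in fixed-size slices,
     * taking the global lock (cfs_b->lock in the real code) on every
     * refill.  The smaller the slice, the more often every CPU has to
     * take that lock.  All names and numbers here are invented.
     */
    #include <stdio.h>
    #include <pthread.h>

    #define NSEC_PER_USEC 1000ULL

    struct toy_bandwidth {              /* stand-in for struct cfs_bandwidth */
        pthread_mutex_t lock;           /* plays the role of cfs_b->lock */
        unsigned long long runtime;     /* remaining global quota (ns) */
        unsigned long long slice;       /* refill granularity (ns) */
        unsigned long long lock_acquisitions;
    };

    struct toy_cfs_rq {                 /* stand-in for a per-cpu cfs_rq */
        unsigned long long runtime_remaining;
    };

    /* Hand one slice of global quota to a cfs_rq; returns the amount granted. */
    static unsigned long long toy_assign_runtime(struct toy_bandwidth *b,
                                                 struct toy_cfs_rq *rq)
    {
        unsigned long long grant;

        pthread_mutex_lock(&b->lock);
        b->lock_acquisitions++;
        grant = b->runtime < b->slice ? b->runtime : b->slice;
        b->runtime -= grant;
        pthread_mutex_unlock(&b->lock);

        rq->runtime_remaining += grant;
        return grant;
    }

    /* Simulate one CPU consuming 'demand' ns, pulling slices as it runs dry. */
    static void toy_consume(struct toy_bandwidth *b, struct toy_cfs_rq *rq,
                            unsigned long long demand)
    {
        while (demand) {
            unsigned long long step;

            if (!rq->runtime_remaining && !toy_assign_runtime(b, rq))
                break;                  /* quota exhausted -> would throttle */

            step = demand < rq->runtime_remaining ?
                    demand : rq->runtime_remaining;
            rq->runtime_remaining -= step;
            demand -= step;
        }
    }

    int main(void)
    {
        unsigned long long slices_us[] = { 5000, 500 }; /* default vs. tuned */

        for (int i = 0; i < 2; i++) {
            struct toy_bandwidth b = {
                .runtime = 100000 * NSEC_PER_USEC,      /* 100ms of quota */
                .slice   = slices_us[i] * NSEC_PER_USEC,
            };
            struct toy_cfs_rq rq = { 0 };

            pthread_mutex_init(&b.lock, NULL);
            toy_consume(&b, &rq, 50000 * NSEC_PER_USEC); /* burn 50ms */
            pthread_mutex_destroy(&b.lock);

            printf("slice %5lluus -> %llu global-lock acquisitions\n",
                   slices_us[i], b.lock_acquisitions);
        }
        return 0;
    }

    The demo is single-threaded and only counts refills, but with many CPUs
    doing these refills concurrently a 10x increase in refill frequency
    translates directly into 10x more traffic on a single global lock, which
    is the contention concern raised above.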

