    Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
    * Peter Zijlstra <a.p.zijlstra@chello.nl> [2011-09-13 18:36:15]:
    > > The value of sched_cfs_bandwidth_slice_us was reduced from the default
    > > of 5000us to 500us, which (along with a reduction of the min/max
    > > interval) helped cut down idle time further (3.9% -> 2.7%). I was
    > > commenting that this may not necessarily be optimal (as, for example,
    > > a low 'sched_cfs_bandwidth_slice_us' could result in all cpus
    > > contending for cfs_b->lock very frequently).
    >
    > Right.. so this seems to suggest you're migrating a lot.

    We did run some experiments (outside of capping) to see how badly tasks
    migrate on the latest tip compared to previous kernels. The test was to
    spawn 32 cpu hogs on a 16-cpu system (placed in the default cgroup,
    without any capping in place) and measure how much they bounce around.
    The system had little load besides these cpu hogs.
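
    (A rough sketch of that kind of measurement, assuming CONFIG_SCHED_DEBUG
    so that /proc/<pid>/sched exposes se.nr_migrations; this is illustrative
    only, not the exact harness we ran:)

        #!/bin/sh
        # Illustrative only: spawn NR_HOGS busy loops, let them run,
        # then read each task's se.nr_migrations counter
        # (needs CONFIG_SCHED_DEBUG for /proc/<pid>/sched).
        NR_HOGS=32
        RUNTIME=60
        pids=""
        i=0
        while [ $i -lt $NR_HOGS ]; do
            while :; do :; done &
            pids="$pids $!"
            i=$((i + 1))
        done
        sleep $RUNTIME
        for p in $pids; do
            grep se.nr_migrations /proc/$p/sched
            kill $p
        done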

    We saw a considerably higher migration count on the latest tip compared
    to previous kernels. Kamalesh, can you please post the migration count
    data?

    > Also what workload are we talking about? the insane one with 5 groups of
    > weight 1024?

    We were never running the "insane" one; we always run with proportional
    shares, the "sane" one! I missed mentioning that bit (about the shares
    setup) in my first email. I am attaching the test script we are using,
    for your reference. FYI, we have added additional levels to the cgroup
    setup (/Level1/Level2/C1/C1_1 etc.) to mimic the cgroup hierarchy that
    libvirt creates for VMs; a sketch of that layout follows below.
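
    (Roughly along these lines, assuming the v1 cpu controller is mounted
    at /cgroup; directory names are illustrative, and this is not the
    attached script:)

        #!/bin/sh
        # Illustrative only: nested cpu-cgroup hierarchy with equal
        # (default) proportional shares, mimicking libvirt's layout
        # for VMs. Assumes the v1 cpu controller mounted at /cgroup.
        CG=/cgroup
        for grp in C1 C2; do
            mkdir -p $CG/Level1/Level2/$grp/${grp}_1
            echo 1024 > $CG/Level1/Level2/$grp/cpu.shares
            echo 1024 > $CG/Level1/Level2/$grp/${grp}_1/cpu.shares
        done
        # run a hog inside one of the leaf groups
        echo $$ > $CG/Level1/Level2/C1/C1_1/tasks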

    > Ramping up the frequency of the load-balancer and giving out smaller
    > slices is really anti-scalability.. I bet a lot of that 'reclaimed' idle
    > time is spent in system time.

    System time (in top and vmstat) does remain unchanged at 0% when
    cranking up the load-balance frequency and slicing down
    sched_cfs_bandwidth_slice_us. I guess the additional "system" time
    can't easily be accounted for by the tick-based accounting we have.
    I agree there could be other unobserved side effects of the increased
    load-balance frequency (such as on workload performance) that I haven't
    noticed. For concreteness, the knobs in question are sketched below.
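
    (Example values only, not a recommendation; the per-domain interval
    files are only present with CONFIG_SCHED_DEBUG:)

        #!/bin/sh
        # Illustrative only: the two kinds of knobs discussed here.
        # CFS bandwidth slice handed out per cpu (default 5000us):
        echo 500 > /proc/sys/kernel/sched_cfs_bandwidth_slice_us
        # Lower the per-domain min/max balance intervals to make the
        # load balancer run more often:
        for d in /proc/sys/kernel/sched_domain/cpu*/domain*; do
            echo 1 > $d/min_interval
            echo 2 > $d/max_interval
        done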

    - vatsa
    [attachment: test script, application/x-sh]