Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
From: Peter Zijlstra <>
Date: Tue, 13 Sep 2011 18:36:15 +0200
On Tue, 2011-09-13 at 21:51 +0530, Srivatsa Vaddagiri wrote:
> > I can't read it seems.. I thought you were talking about increasing the
> > period,
>
> Mm ..I brought up the increased lock contention with reference to this
> experimental result that I posted earlier:
>
> > Tuning min_interval and max_interval of various sched_domains to 1
> > and also setting sched_cfs_bandwidth_slice_us to 500 does cut down idle
> > time further to 2.7%
Yeah, that's the not being able to read part..
> Value of sched_cfs_bandwidth_slice_us was reduced from default of 5000us
> to 500us, which (along with reduction of min/max interval) helped cut down
> idle time further (3.9% -> 2.7%). I was commenting that this may not
> necessarily be optimal (as for example low 'sched_cfs_bandwidth_slice_us'
> could result in all cpus contending for cfs_b->lock very frequently).
Right.. so this seems to suggest you're migrating a lot.
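
To make the cfs_b->lock point concrete, here is a rough userspace sketch
of how the slice hand-out behaves. It only mirrors the shape of the
bandwidth code (one global pool behind a single lock, per-rq slices carved
off it); the structures, bodies and numbers below are simplified stand-ins,
not the actual kernel implementation:

	/*
	 * Simplified stand-ins for the kernel structures; illustration
	 * only, not the actual CFS bandwidth implementation.
	 */
	#include <stdio.h>
	#include <stdbool.h>

	typedef unsigned long long u64;

	struct cfs_bandwidth {
		/* spinlock_t lock;  -- one global lock per task group */
		u64 quota;     /* runtime refilled every period (ns) */
		u64 runtime;   /* what is left in the current period (ns) */
	};

	struct cfs_rq {
		u64 runtime_remaining; /* per-cpu slice, consumed without the lock */
	};

	/*
	 * Each time a cfs_rq exhausts its local slice it has to come back
	 * here and take the global cfs_b->lock for a refill, so a 10x
	 * smaller sched_cfs_bandwidth_slice_us means ~10x more trips to
	 * the same lock from every CPU in the group.
	 */
	static bool assign_slice(struct cfs_bandwidth *cfs_b, struct cfs_rq *cfs_rq,
				 u64 slice_ns)
	{
		u64 amount = 0;

		/* spin_lock(&cfs_b->lock); */
		if (cfs_b->runtime > 0) {
			amount = cfs_b->runtime < slice_ns ? cfs_b->runtime : slice_ns;
			cfs_b->runtime -= amount;
		}
		/* spin_unlock(&cfs_b->lock); */

		cfs_rq->runtime_remaining += amount;
		return amount > 0;
	}

	int main(void)
	{
		u64 quota = 50ULL * 1000 * 1000;                  /* made-up 50ms quota */
		u64 slices[] = { 5000ULL * 1000, 500ULL * 1000 }; /* 5ms vs 0.5ms slice */

		for (int i = 0; i < 2; i++) {
			struct cfs_bandwidth cfs_b = { .quota = quota, .runtime = quota };
			struct cfs_rq rq = { 0 };
			unsigned long refills = 0;

			while (assign_slice(&cfs_b, &rq, slices[i]))
				refills++;

			printf("slice %4lluus: %3lu lock round-trips per runqueue per period\n",
			       slices[i] / 1000, refills);
		}
		return 0;
	}

With that made-up 50ms quota the 5000us slice needs ~10 refills per
runqueue per period while the 500us slice needs ~100, and every refill is
a round-trip on the same global lock from every CPU in the group.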
Also, what workload are we talking about? The insane one with 5 groups of weight 1024?
Ramping up the frequency of the load-balancer and giving out smaller slices is really anti-scalability.. I bet a lot of that 'reclaimed' idle time is spent in system time.
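
On the load-balancer side, min_interval/max_interval bound how far the
per-domain balance interval can back off once a domain looks balanced.
Below is a simplified illustration of that back-off scheme (not the
kernel's actual rebalance path, and the default bounds are made-up
numbers): with both bounds pinned to 1ms the interval can never grow,
so every balancing opportunity pays for the full domain scan.

	/*
	 * Simplified illustration of the balance-interval back-off; the
	 * real logic lives in the scheduler's rebalance path and differs
	 * in detail.
	 */
	#include <stdio.h>

	struct sched_domain {
		unsigned long min_interval;     /* ms, lower bound on back-off */
		unsigned long max_interval;     /* ms, upper bound on back-off */
		unsigned long balance_interval; /* ms, current interval */
	};

	/* Called after a balance attempt on this domain. */
	static void update_interval(struct sched_domain *sd, int moved_tasks)
	{
		if (moved_tasks) {
			/* found work to pull: balance eagerly again */
			sd->balance_interval = sd->min_interval;
		} else if (sd->balance_interval < sd->max_interval) {
			/* domain looked balanced: back off exponentially */
			sd->balance_interval *= 2;
		}
	}

	int main(void)
	{
		/* default-ish bounds vs. the tuned min = max = 1 case */
		struct sched_domain dflt  = { .min_interval = 8, .max_interval = 64,
					      .balance_interval = 8 };
		struct sched_domain tuned = { .min_interval = 1, .max_interval = 1,
					      .balance_interval = 1 };

		for (int i = 0; i < 6; i++) {	/* six balance attempts, nothing to move */
			update_interval(&dflt, 0);
			update_interval(&tuned, 0);
		}
		printf("default domain backed off to %lums, tuned domain is stuck at %lums\n",
		       dflt.balance_interval, tuned.balance_interval);
		return 0;
	}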