Date:	Tue, 13 Sep 2011 21:51:19 +0530
From:	Srivatsa Vaddagiri <>
Subject:	Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
* Peter Zijlstra <a.p.zijlstra@chello.nl> [2011-09-13 16:07:28]:
> > > > This is perhaps not optimal (as it may lead to more lock contentions), but
> > > > something to note for those who care for both capping and utilization in
> > > > equal measure!
> > >
> > > You meant lock inversion, which leads to more idle time :-)
> >
> > I think 'cfs_b->lock' contention would go up significantly when reducing
> > sysctl_sched_cfs_bandwidth_slice, while for something like 'balancing' lock
> > (taken with SD_SERIALIZE set and more frequently when tuning down
> > max_interval?), yes it may increase idle time! Did you have any other
> > lock in mind when speaking of inversion?
>
> I can't read it seems.. I thought you were talking about increasing the
> period,
Mm .. I brought up the increased lock contention with reference to this experimental result that I posted earlier:
> Tuning min_interval and max_interval of various sched_domains to 1
> and also setting sched_cfs_bandwidth_slice_us to 500 does cut down idle
> time further to 2.7%
The value of sched_cfs_bandwidth_slice_us was reduced from the default of 5000us to 500us, which (along with the reduction of min/max interval) helped cut idle time further (3.9% -> 2.7%). I was commenting that this may not necessarily be optimal (for example, a low 'sched_cfs_bandwidth_slice_us' could result in all cpus contending for cfs_b->lock very frequently).
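
To make the contention point concrete, here is a rough user-space model of the slice mechanism (purely illustrative, all names and structures are made up for the example, locking and period refresh are simplified): each cpu keeps a small local runtime pool and, when it runs dry, pulls one slice from the global pool under a single lock, which is the role cfs_b->lock plays.

#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>

struct global_pool {
	pthread_mutex_t lock;		/* stands in for cfs_b->lock */
	uint64_t runtime_ns;		/* runtime left in the current period */
};

struct local_pool {
	uint64_t runtime_ns;		/* runtime already pulled by this cpu */
};

/* models sched_cfs_bandwidth_slice_us = 500 */
static const uint64_t slice_ns = 500 * 1000ULL;

/* Pull at most one slice from the global pool; false => must throttle. */
static bool refill_local(struct global_pool *g, struct local_pool *l)
{
	uint64_t amount = 0;

	pthread_mutex_lock(&g->lock);
	if (g->runtime_ns > 0) {
		amount = g->runtime_ns < slice_ns ? g->runtime_ns : slice_ns;
		g->runtime_ns -= amount;
	}
	pthread_mutex_unlock(&g->lock);

	l->runtime_ns += amount;
	return amount > 0;
}

With the slice modelled at 500us instead of 5000us, each cpu makes roughly ten times as many trips through that critical section for the same amount of consumed runtime, which is why I expect the contention to go up.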
> which increases the time you force a task to sleep that's holding locks etc..
Ideally all tasks should get capped at about the same time, given that there is a global pool from which everyone pulls bandwidth? So while one vcpu/task (holding a lock) gets capped, other vcpus/tasks (that may want the same lock) should ideally not keep running for long after that, avoiding the lock-inversion related problems you point out.
I guess we may still run into that with the current implementation. Basically, the global pool may have zero runtime left for the current period, forcing a vcpu/task to be throttled, while there is surplus runtime in per-cpu pools, allowing some sibling vcpus/tasks to run for a wee bit more, leading to lock-inversion related problems (more idling). That makes me think we can improve the directed yield -> capping interaction. Essentially, when the target task of a directed yield is capped, can the "yielding" task donate some of its bandwidth?
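
Just to sketch what I mean by donation (a hypothetical interface, nothing like this exists today; locking and proper accounting are omitted and all names are made up):

#include <stdbool.h>
#include <stdint.h>

struct task_bw {
	uint64_t local_runtime_ns;	/* runtime this task's group still holds */
	bool throttled;			/* capped because the global pool ran dry */
};

/* Donate up to donate_ns from the yielder to a throttled yield target. */
static bool yield_to_donate(struct task_bw *yielder, struct task_bw *target,
			    uint64_t donate_ns)
{
	if (!target->throttled || yielder->local_runtime_ns == 0)
		return false;

	if (donate_ns > yielder->local_runtime_ns)
		donate_ns = yielder->local_runtime_ns;

	yielder->local_runtime_ns -= donate_ns;
	target->local_runtime_ns += donate_ns;
	target->throttled = false;	/* let the (lock-holding) target run again */
	return true;
}

The point is only that the yielder already owns some locally cached runtime it cannot usefully spend while it spins/waits on the lock, so handing part of it to the throttled lock holder could shorten the inversion window.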
- vatsa