From: Paul Turner <>
Date: Thu, 14 Oct 2010 02:27:02 -0700
Subject: Re: [PATCH v3 2/7] sched: accumulate per-cfs_rq cpu usage
On Thu, Oct 14, 2010 at 2:19 AM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Tue, 2010-10-12 at 13:21 +0530, Bharata B Rao wrote:
>> +#ifdef CONFIG_CFS_BANDWIDTH
>> +	{
>> +		.procname	= "sched_cfs_bandwidth_slice_us",
>> +		.data		= &sysctl_sched_cfs_bandwidth_slice,
>> +		.maxlen		= sizeof(unsigned int),
>> +		.mode		= 0644,
>> +		.proc_handler	= proc_dointvec_minmax,
>> +		.extra1		= &one,
>> +	},
>> +#endif
>
> So this is basically your scalability knob.. the larger this value the
> less frequently we have to access global state, but the less parallelism
> is possible due to fewer CPUs depleting the total quota, leaving nothing
> for the others.
>
Exactly
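To make the knob's tradeoff concrete, here is a minimal user-space sketch of the slicing scheme as I understand it (assign_slice, global_quota_ns, and the 5ms/20ms figures are illustrative assumptions, not the actual kernel symbols or defaults): each CPU pulls runtime from the group's global pool one slice at a time, so a larger slice means fewer acquisitions of the global lock but more quota at risk of being stranded on a CPU that goes idle.

/*
 * Illustrative sketch only -- not the kernel implementation.
 * Each CPU grabs runtime from a shared per-group pool in slices of
 * sched_cfs_bandwidth_slice_us. Bigger slices mean fewer global grabs,
 * but quota can be stranded on CPUs that stop running tasks.
 */
#include <stdio.h>

#define NSEC_PER_USEC 1000ULL

static unsigned long long global_quota_ns;   /* refilled each period */
static unsigned int sysctl_slice_us = 5000;  /* the knob under discussion */

/* Pull up to one slice of runtime from the global pool (hypothetical). */
static unsigned long long assign_slice(void)
{
	unsigned long long want = sysctl_slice_us * NSEC_PER_USEC;
	unsigned long long got = want < global_quota_ns ? want : global_quota_ns;

	global_quota_ns -= got;    /* done under the bandwidth lock in reality */
	return got;                /* becomes CPU-local runtime */
}

int main(void)
{
	global_quota_ns = 20000 * NSEC_PER_USEC;   /* 20ms of quota this period */

	/* Four CPUs each take a slice: with a 5ms slice the whole quota is
	 * handed out in one round; any CPU that then idles strands 5ms. */
	for (int cpu = 0; cpu < 4; cpu++)
		printf("cpu%d got %llu us, %llu us left globally\n",
		       cpu, assign_slice() / NSEC_PER_USEC,
		       global_quota_ns / NSEC_PER_USEC);
	return 0;
}

With a 5ms slice and 20ms of quota, four CPUs drain the pool in a single round; a fifth runnable CPU would be throttled even though some of the handed-out runtime may never be consumed.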
> I guess one could go try and play load-balancer games to try and
> mitigate this by pulling this group's tasks to the CPU(s) that have more
> bandwidth for that group, but balancing that against the regular
> load-balancer goal of well balancing load will undoubtedly be
> 'interesting'...
>
I considered this approach as an alternative previously, but I don't think it can be implemented effectively:
Since quota will likely expire in a staggered fashion, you're going to get a funnel-herd effect as everything crowds onto the CPUs that still have quota remaining.
It's much more easily avoided by keeping the slice small enough (relative to the bandwidth period) that we're not potentially stranding a significant percentage of our quota. The potential for abuse could be reduced or eliminated here by making the slice size a constant ratio of the period length. This would also make the achievable parallelism more deterministic.
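Here's a hypothetical sketch of that constant-ratio idea (the 1/16 ratio and all the names are assumptions, nothing from the patch set): deriving the slice from the period bounds both the number of slices per period and the worst-case stranded fraction, independent of how the period is tuned.

/* Hypothetical: size the slice as a fixed fraction of the period
 * instead of via an independent sysctl. With a 1/16 ratio at most
 * 16 slices exist per period, so the worst-case stranded quota and
 * the achievable parallelism both scale with the period. */
#include <stdio.h>

#define CFS_SLICE_RATIO_SHIFT 4   /* slice = period / 16 (assumed ratio) */

static unsigned long long slice_from_period(unsigned long long period_us)
{
	unsigned long long slice = period_us >> CFS_SLICE_RATIO_SHIFT;

	return slice ? slice : 1;  /* never hand out a zero-length slice */
}

int main(void)
{
	/* A 100ms period gives a 6250us slice; a 10ms period gives 625us. */
	printf("%llu\n", slice_from_period(100000));
	printf("%llu\n", slice_from_period(10000));
	return 0;
}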
I also think versioning the quota, so that unused quota can potentially be returned and redistributed when a task sleeps, is a more effective and efficient way to avoid stranding quota.
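As a rough sketch of how I picture the versioning (entirely hypothetical code, not what's in the patches): the global pool carries a generation number that is bumped at every refresh, and a sleeping cfs_rq hands its local runtime back only if its recorded generation still matches, i.e. the quota it holds hasn't already expired; stale runtime is simply dropped.

/* Hypothetical sketch of versioned quota return on sleep. */
#include <stdio.h>

struct global_pool {
	unsigned long long quota_ns;
	unsigned int generation;        /* bumped on every period refresh */
};

struct local_rq {
	unsigned long long runtime_ns;  /* locally held, unconsumed runtime */
	unsigned int quota_generation;  /* generation the runtime came from */
};

static void return_quota_on_sleep(struct global_pool *gp, struct local_rq *rq)
{
	if (rq->quota_generation == gp->generation)
		gp->quota_ns += rq->runtime_ns;  /* still current: give it back */
	rq->runtime_ns = 0;                      /* stale runtime is dropped */
}

int main(void)
{
	struct global_pool gp = { .quota_ns = 0, .generation = 7 };
	struct local_rq rq = { .runtime_ns = 3000000, .quota_generation = 7 };

	return_quota_on_sleep(&gp, &rq);
	printf("returned %llu ns to the global pool\n", gp.quota_ns);
	return 0;
}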