Subject: Re: CFS Bandwidth Control - Test results of cgroups tasks pinned vs unpinned
From: Peter Zijlstra <>
Date: Mon, 12 Sep 2011 14:35:43 +0200
On Mon, 2011-09-12 at 15:47 +0530, Srivatsa Vaddagiri wrote:
> * Peter Zijlstra <a.p.zijlstra@chello.nl> [2011-09-09 14:31:02]:
> 
> > > Machine : 16-cpus (2 Quad-core w/ HT enabled)
> > > Cgroups : 5 in number (C1-C5), each having {2, 2, 4, 8, 16} tasks respectively.
> > >           Further, each task is placed in its own (sub-)cgroup with
> > >           a capped usage of 50% CPU.
> > 
> > So that's loads: {512,512}, {512,512}, {256,256,256,256}, {128,..} and {64,..}
> 
> Yes, with the default shares of 1024 for each cgroup.
> 
> FWIW we did also try setting shares for each cgroup proportional to number of
> tasks it has. For ex: C1's shares = 1024 * 2 = 2048, C2 = 1024 * 2 = 2048,
> C3 = 4 * 1024 = 4096 etc. while /C1/C1_1, /C1/C1_2, .../C5/C5_16/ shares were
> left at default of 1024 (as those sub-cgroups contain only one task).
> 
> That does help reduce idle time by almost 50% (from 15-20% -> 6-9%)
Of course it does.. and I bet you can improve that slightly if you manage to fix some of the numerical nightmares that live in the cgroup load-balancer (Paul, care to share your WIP?)
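[Editorial note: a minimal userspace sketch, not kernel code, that just redoes the arithmetic from the figures quoted above. It assumes a task's effective weight is roughly its cgroup's shares divided over the group's runnable per-task sub-cgroups, which is where the {512,512}, {256,...} loads come from, and shows why scaling shares by task count restores ~1024 per task.]

	#include <stdio.h>

	#define NR_GROUPS	5
	#define DEFAULT_SHARES	1024

	int main(void)
	{
		/* tasks per cgroup C1..C5, as in the test setup quoted above */
		int nr_tasks[NR_GROUPS] = { 2, 2, 4, 8, 16 };
		int i;

		for (i = 0; i < NR_GROUPS; i++) {
			/*
			 * Default case: every cgroup keeps 1024 shares, which
			 * get split over its runnable per-task sub-cgroups, so
			 * each task ends up with roughly 1024 / nr_tasks.
			 */
			int w_default = DEFAULT_SHARES / nr_tasks[i];

			/*
			 * Scaled case: shares = 1024 * nr_tasks, so each task
			 * gets the full default weight back.
			 */
			int w_scaled = DEFAULT_SHARES * nr_tasks[i] / nr_tasks[i];

			printf("C%d: %2d tasks, per-task weight %4d (default shares), %4d (scaled shares)\n",
			       i + 1, nr_tasks[i], w_default, w_scaled);
		}

		return 0;
	}

[This prints 512/512/256/128/64 for the default case and 1024 throughout for the scaled case, matching the loads quoted above.]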
But the initial scenario is a complete and utter fail; it's impossible to schedule that sanely. It's an infeasible weight scenario with more tasks than cpus, and the added bandwidth constraints just keep changing the set, requiring endless migrations to try and keep utilization from tanking.
Really, classic fail.
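[Editorial note: a back-of-the-envelope sketch of the infeasibility, derived from the numbers above rather than from the thread itself. Total weight is 4*512 + 4*256 + 8*128 + 16*64 = 5120, so a 512-weight task is nominally entitled to 512/5120 * 16 = 1.6 cpus, which a single task can never consume even before the 50% cap throttles it to 0.5.]

	#include <stdio.h>

	int main(void)
	{
		int nr_cpus = 16;
		/* per-task weights and task counts for C1..C5, from the numbers above */
		int weight[5]   = { 512, 512, 256, 128, 64 };
		int nr_tasks[5] = { 2, 2, 4, 8, 16 };
		double cap = 0.5;	/* each task throttled at 50% of a cpu */
		int total_weight = 0;
		int i;

		for (i = 0; i < 5; i++)
			total_weight += weight[i] * nr_tasks[i];

		for (i = 0; i < 5; i++) {
			/* cpu time the weight says the task should get */
			double entitled = (double)weight[i] / total_weight * nr_cpus;
			/* what a single task can actually use: at most one cpu, and capped */
			double usable = entitled < cap ? entitled : cap;

			printf("C%d task: weight %4d, entitled %.2f cpus, usable %.2f cpus%s\n",
			       i + 1, weight[i], entitled, usable,
			       entitled > 1.0 ? "  <- infeasible even without the cap" : "");
		}

		return 0;
	}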