Date: Mon, 16 Nov 2009 17:18:27 -0500
From: Vivek Goyal <>
Subject: Re: [RFC] Block IO Controller V2 - some results
On Mon, Nov 16, 2009 at 03:51:00PM -0500, Alan D. Brunelle wrote:
[..]
> ::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
>
> The next thing to look at is to see what the "penalty" is for the
> additional code: see how much bandwidth we lose for the capability
> added. Here we see the sum of the system's throughput for the various
> tests:
>
> ---- ---- - ----------- ----------- ----------- -----------
> Mode RdWr N        base     ioc off ioc no idle    ioc idle
> ---- ---- - ----------- ----------- ----------- -----------
> rnd  rd   2        17.3        17.1         9.4         9.1
> rnd  rd   4        27.1        27.1         8.1         8.2
> rnd  rd   8        37.1        37.1         6.8         7.1
>
Hi Alan,
This seems to be the most notable result in terms of performance degradation.
I ran two random readers on a locally attached SATA disk (roughly the setup sketched below). There I in fact gain performance, because we now perform fewer seeks: we allocate a continuous slice to one group and then move on to the next group.
But your setup looks like a striped set of disks, where the seek cost is low, so the per-group wait for the sync-noidle workload hurts instead.
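For reference, the two-reader test was along these lines (a sketch only: fio stands in for the actual reader, and the cgroup mount point and device name are placeholders):

  # mount the blkio controller and create two groups
  mount -t cgroup -o blkio none /cgroup
  mkdir /cgroup/test1 /cgroup/test2

  # start one direct-I/O random reader in each group; a task
  # started from the shell inherits the shell's current cgroup
  echo $$ > /cgroup/test1/tasks
  fio --name=rd1 --filename=/dev/sdX --rw=randread --direct=1 --runtime=30 &
  echo $$ > /cgroup/test2/tasks
  fio --name=rd2 --filename=/dev/sdX --rw=randread --direct=1 --runtime=30 &
  wait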
One simple way to test that would be to set slice_idle=0 so that CFQ does not try to do any idling at all. Can you please re-run the above test? This will help figure out whether the above performance regression comes from the per-cgroup idling on the sync-noidle workload group or not.
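Something like the following should do it at run time (sdX is a placeholder; the path assumes CFQ is the active elevator on that device):

  # disable CFQ idling entirely
  echo 0 > /sys/block/sdX/queue/iosched/slice_idle
  # ... re-run the random read tests ...
  # restore the default (8)
  echo 8 > /sys/block/sdX/queue/iosched/slice_idle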
The above numbers are in what units?
Thanks
Vivek