Subject: Re: [RFC] Block IO Controller V2 - some results
From: "Alan D. Brunelle" <>
Date: Tue, 17 Nov 2009 12:30:07 -0500
On Tue, 2009-11-17 at 11:40 -0500, Vivek Goyal wrote:
> On Tue, Nov 17, 2009 at 05:17:53PM +0100, Corrado Zoccolo wrote:
> > Hi Vivek,
> > the performance drop reported by Alan was my main concern about your
> > approach. Probably you should mention/document somewhere that when the
> > number of groups is too large, there is a large decrease in random
> > read performance.
> >
>
> Hi Corrado,
>
> I thought more about it. We idle on the sync-noidle group only in the
> case of rotational media not supporting NCQ (hw_tag = 0). So for all
> the fast hardware out there (SSDs and fast arrays), we should not be
> idling on the sync-noidle group and hence should see no additional
> idling per group.
>
> This is all subject to the fact that we have done a good job of
> detecting the queue depth and have updated hw_tag accordingly.
>
> On slower rotational hardware, where we will actually do idling on
> sync-noidle per group, idling can in fact help you because it will
> reduce the number of seeks (as it does on my locally connected SATA
> disk).
>
> > However, we can check a few things:
> > * is this kernel built with HZ < 1000? The smallest idle CFQ will do
> > is given by 2/HZ, so running with a small HZ will increase the impact
> > of idling.
> >
> > On Tue, Nov 17, 2009 at 3:14 PM, Vivek Goyal <vgoyal@redhat.com> wrote:
> > > Regarding the reduced throughput for the random IO case, ideally we
> > > should not idle on the sync-noidle group on this hardware as this
> > > seems to be fast NCQ-supporting hardware. But I guess we might not
> > > be detecting the queue depth properly, which leads to idling on the
> > > per-group sync-noidle workload and forces the queue depth to be 1.
> >
> > * This can be ruled out by testing my NCQ detection fix patch
> > (http://groups.google.com/group/linux.kernel/browse_thread/thread/3b62f0665f0912b6/34ec9456c7da1bb7?lnk=raot)
>
> This will be a good patch to test here. Alan, can you also apply this
> patch and see if we see any improvement.
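The idling decision Vivek describes above, gated on hw_tag, amounts to
roughly the sketch below. This is an illustrative standalone C model of
the decision, not the actual cfq-iosched.c code; the function and
parameter names here are hypothetical:

    /*
     * Illustrative model of the per-group idling decision described
     * above; hypothetical names, not the real cfq-iosched.c logic.
     * hw_tag stands in for CFQ's runtime guess at whether the device
     * keeps multiple requests in flight (NCQ / deep queue).
     */
    #include <stdbool.h>
    #include <stdio.h>

    static bool idle_on_sync_noidle_group(bool hw_tag)
    {
            /* Fast NCQ hardware (SSDs, arrays): hw_tag is set, so no
             * per-group idling and no extra throughput penalty. */
            if (hw_tag)
                    return false;

            /* Rotational non-NCQ media: idle on the sync-noidle
             * group to reduce seeking. */
            return true;
    }

    int main(void)
    {
            printf("fast array (hw_tag=1): idle=%d\n",
                   idle_on_sync_noidle_group(true));
            printf("SATA disk  (hw_tag=0): idle=%d\n",
                   idle_on_sync_noidle_group(false));
            return 0;
    }

The open question in this thread is then whether hw_tag is being
detected correctly: if a fast array is misdetected as hw_tag = 0, every
group idles and random-read throughput drops.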
Vivek: Do you want me to move this over to the V3 version & apply this patch, or stick w/ V2?
Thanks,
Alan
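As an aside on Corrado's HZ question above: CFQ's idle timer is armed
in jiffies, so per his note its floor is 2 jiffies = 2/HZ seconds. A
small standalone demo of the arithmetic (hypothetical, not kernel
code):

    /* The 2/HZ idle floor worked out for common HZ values. */
    #include <stdio.h>

    int main(void)
    {
            const int hz[] = { 100, 250, 1000 };

            for (int i = 0; i < 3; i++)
                    /* 2 jiffies expressed in milliseconds. */
                    printf("HZ=%4d -> minimum idle window = %2d ms\n",
                           hz[i], 2 * 1000 / hz[i]);
            return 0;
    }

So a kernel built with HZ=100 cannot idle for less than 20 ms, ten
times the 2 ms floor of an HZ=1000 build, which is why a small HZ
magnifies the cost of any per-group idling.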