    Subject: Re: CFQ read performance regression
    Date: 2010-04-26
    On Sat, Apr 24, 2010 at 10:36:48PM +0200, Corrado Zoccolo wrote:

    [..]
    > >> Anyway, if that's the case, then we probably need to allow IO from
    > >> multiple sequential readers and keep a watch on throughput. If throughput
    > >> drops, then reduce the number of parallel sequential readers. Not sure how
    > >> much code that is, but with multiple cfqqs going in parallel, the ioprio
    > >> logic will more or less stop working in CFQ (on multi-spindle hardware).
    > Hi Vivek,
    > I tried to implement exactly what you are proposing, see the attached patches.
    > I leverage the queue merging features to let multiple cfqqs share the
    > disk in the same timeslice.
    > I changed the queue split code to trigger on a throughput drop instead
    > of on a seeky pattern, so diverging queues can remain merged if they
    > have good throughput. Moreover, I measure the max bandwidth reached by
    > single queues and merged queues (you can see the values in the
    > bandwidth sysfs file).
    > If merged queues can outperform non-merged ones, the queue merging
    > code will try to opportunistically merge together queues that cannot
    > submit enough requests to fill half of the NCQ slots. I'd like to know
    > if you can see any improvements out of this on your hardware. There
    > are some magic numbers in the code; you may want to try tuning them.
    > Note that, since the opportunistic queue merging will start happening
    > only after merged queues have proven to reach higher bandwidth than
    > non-merged queues, you should use the disk for a while before trying
    > the test (and you can check sysfs), or the merging will not happen.

    Hi Corrado,

    I ran these patches and I did not see any improvement. I think the reason
    is that no cooperative queue merging took place, so we never gathered
    any throughput data with the coop flag on.

    #cat /sys/block/dm-3/queue/iosched/bandwidth
    230 753 0
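
    If I'm reading the intent of the patches right, the gate for
    opportunistic merging amounts to something like the sketch below
    (untested; the struct and all field names here are invented for
    illustration, not taken from your patches):

    /* Invented stats container: best bandwidth seen so far in each mode. */
    struct cfqq_bw_stats {
            unsigned long single_bw;        /* best KB/s from a lone queue */
            unsigned long merged_bw;        /* best KB/s from merged queues */
    };

    /*
     * Split a merged queue on a measured throughput drop rather than on
     * the first seeky pattern; tolerate a small drop before splitting.
     */
    static int cfqq_should_split(const struct cfqq_bw_stats *s,
                                 unsigned long cur_bw)
    {
            /* "magic number": split only when >10% below single-queue best */
            return cur_bw * 10 < s->single_bw * 9;
    }

    /*
     * Merge opportunistically only once merged queues have proven faster,
     * and only for queues that cannot fill half of the NCQ slots alone.
     */
    static int cfqq_should_merge(const struct cfqq_bw_stats *s,
                                 unsigned int queued_a, unsigned int queued_b,
                                 unsigned int ncq_slots)
    {
            return s->merged_bw > s->single_bw &&
                   queued_a < ncq_slots / 2 && queued_b < ncq_slots / 2;
    }

    If the gate looks anything like this, then with no merged-queue
    bandwidth ever recorded the merged_bw > single_bw condition can never
    become true, which would explain why I see no merging at all.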

    I think we need to implement something similar to the hw_tag detection
    logic, where we allow dispatches from multiple sync-idle queues at a time
    and observe the BW. Once we have observed the BW over a certain window,
    we then set the system behavior accordingly.
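
    Something along these lines, maybe (very rough, untested sketch; the
    names and the window mechanics are made up, and time is passed in
    explicitly rather than read from jiffies):

    enum bw_probe_mode { BW_PROBE_SINGLE, BW_PROBE_MULTI, BW_SETTLED };

    struct bw_probe {
            enum bw_probe_mode mode;
            unsigned long window_start;     /* time when this window opened */
            unsigned long window_len;       /* length of one sample window */
            unsigned long done;             /* sectors completed in window */
            unsigned long bw_single;        /* rate measured with one queue */
            unsigned long bw_multi;         /* rate with multiple sync-idle queues */
    };

    /*
     * Called on request completion: account progress; when a window
     * closes, record the rate for the mode just probed and move on.
     */
    static void bw_probe_complete(struct bw_probe *p, unsigned long now,
                                  unsigned long sectors)
    {
            unsigned long elapsed = now - p->window_start;

            p->done += sectors;
            if (elapsed < p->window_len)    /* window_len assumed nonzero */
                    return;

            if (p->mode == BW_PROBE_SINGLE) {
                    p->bw_single = p->done / elapsed;
                    p->mode = BW_PROBE_MULTI;       /* probe parallel dispatch next */
            } else if (p->mode == BW_PROBE_MULTI) {
                    p->bw_multi = p->done / elapsed;
                    p->mode = BW_SETTLED;
            }
            p->done = 0;
            p->window_start = now;
    }

    /*
     * Dispatch-side check: allow more than one sync-idle queue while
     * probing multi mode, or after multi mode has measured higher BW.
     */
    static int bw_allow_multi_dispatch(const struct bw_probe *p)
    {
            return p->mode == BW_PROBE_MULTI ||
                   (p->mode == BW_SETTLED && p->bw_multi > p->bw_single);
    }

    Re-opening the probe windows periodically would also let the decision
    adapt if the workload changes.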

    Kernel=2.6.34-rc5-corrado-multicfq
    DIR= /mnt/iostmnt/fio DEV= /dev/mapper/mpathe
    Workload=bsr iosched=cfq Filesz=2G bs=4K
    ==========================================================================
    job  Set  NR  ReadBW(KB/s)  MaxClat(us)  WriteBW(KB/s)  MaxClat(us)
    ---  ---  --  ------------  -----------  -------------  -----------
    bsr    1   1        126590        61448              0            0
    bsr    1   2        127849       242843              0            0
    bsr    1   4        131886       508021              0            0
    bsr    1   8        131890       398241              0            0
    bsr    1  16        129167       454244              0            0

    Thanks
    Vivek

