    Subject: Re: [PATCH 0/2] cfq-iosched: fixing RQ_NOIDLE handling.
    Corrado Zoccolo <czoccolo@gmail.com> writes:

    > Can you test the attached patch, where I also added your changes to
    > make jbd(2) to perform sync writes?
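
    For anyone following along, the jbd2 change in question is roughly of
    this shape. This is an illustrative sketch, not the actual hunk from
    Corrado's attached patch, and the function name here is made up:

        /*
         * Illustrative sketch: submitting the journal commit buffer
         * with WRITE_SYNC instead of WRITE makes the block layer
         * treat it as synchronous, so CFQ classifies the jbd2 queue
         * as sync and lets it preempt the idling fs_mark queue.
         */
        static void jbd2_submit_commit_bh(struct buffer_head *bh)
        {
                lock_buffer(bh);
                clear_buffer_dirty(bh);
                set_buffer_uptodate(bh);
                bh->b_end_io = end_buffer_write_sync;
                submit_bh(WRITE_SYNC, bh);      /* was: submit_bh(WRITE, bh) */
        }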

    I got new storage, so I have new numbers. I only re-ran deadline and
    vanilla cfq for the fs_mark-only test. The average of 10 runs comes
    out like so:

    deadline:    571.98 files/sec
    vanilla cfq: 107.42 files/sec
    patched cfq: 460.9  files/sec

    Mixed workload results with your suggested patch:

    fs_mark: 15.65 files/sec
    fio: 132.5 MB/s

    So, again, not looking great for the mixed workload, but the patch
    does improve the fs_mark-only case. Looking at the blktrace data shows
    that the jbd2 thread preempts the fs_mark thread at all the right
    times. The only thing holding throughput back is the notion that we
    must dispatch from only one queue at a time (even though the storage
    is capable of serving both the reads and writes simultaneously).
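
    For reference, that single-queue behavior lives in CFQ's dispatch
    path, which looks roughly like this (heavily simplified from
    cfq-iosched.c of this era; details elided):

        /*
         * Simplified sketch of cfq_dispatch_requests(): a single
         * cfq_queue is active at a time, and all dispatch comes from
         * it until its slice expires or it is preempted. This is why
         * the reads and writes are never in flight together, even on
         * storage that could service both at once.
         */
        static int cfq_dispatch_requests(struct request_queue *q, int force)
        {
                struct cfq_data *cfqd = q->elevator->elevator_data;
                struct cfq_queue *cfqq;

                cfqq = cfq_select_queue(cfqd);  /* the one active queue */
                if (!cfqq)
                        return 0;

                /* move a single request from cfqq to the dispatch list */
                if (!cfq_dispatch_request(cfqd, cfqq))
                        return 0;

                return 1;
        }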

    I then added the patch that allows simultaneous dispatch of reads and
    writes, and here are the results from that run:

    fs_mark: 15.975 files/sec
    fio: 132.4 MB/s

    So, it looks like that didn't help. The reason this patch doesn't come
    close to the yield patch in the mixed workload is that the yield patch
    set allows the fs_mark process to continue to issue I/O. With your
    patch, the fs_mark process does 64KB of I/O, the jbd2 thread does the
    journal commit, and then the fio process runs again. Given that the
    fs_mark process typically uses only a small fraction of its time
    slice, you end up with an unfair balance.

    Now, we still have to decide whether that's a problem that needs
    solving. I tried to gather data from the field, but I've been unable
    to say conclusively whether any real application issues this sort of
    dependent I/O.
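
    To make "dependent I/O" concrete, the pattern I mean is a small write
    followed by an fsync, where the next I/O can't be issued until the
    jbd2 commit completes. A hypothetical user-space reproducer (not taken
    from fs_mark itself):

        /*
         * Hypothetical reproducer: each 64KB write is followed by
         * fsync(), so the process blocks behind the jbd2 journal
         * commit before it can issue its next I/O.
         */
        #include <fcntl.h>
        #include <stdio.h>
        #include <unistd.h>

        int main(void)
        {
                static char buf[65536]; /* 64KB, the I/O size seen in blktrace */
                int i;

                for (i = 0; i < 1000; i++) {
                        char name[64];
                        int fd;

                        snprintf(name, sizeof(name), "file.%d", i);
                        fd = open(name, O_CREAT | O_WRONLY | O_TRUNC, 0644);
                        if (fd < 0)
                                return 1;
                        if (write(fd, buf, sizeof(buf)) != sizeof(buf))
                                return 1;
                        fsync(fd);      /* waits on the journal commit */
                        close(fd);
                }
                return 0;
        }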

    As such, I am happy with this patch. If we see that we need something
    like the blk_yield approach (sketched below), then I'm happy to
    resurrect that work.
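
    The idea there was for the filesystem to hand the rest of its time
    slice to the journal thread before blocking on the commit. A rough
    sketch of the call site, from memory (the function names and the
    blk_yield signature are approximate, the real series may differ):

        /*
         * Rough sketch of the blk_yield idea (names approximate):
         * before blocking on the journal commit, tell the I/O
         * scheduler to give the remainder of our time slice to the
         * jbd2 thread instead of idling on our queue.
         */
        static void fs_wait_on_commit(journal_t *journal, tid_t tid)
        {
                struct request_queue *q = bdev_get_queue(journal->j_dev);

                blk_yield(q, journal->j_task);  /* yield slice to jbd2 */
                jbd2_log_wait_commit(journal, tid);
        }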

    Jens, do you find Corrado's patch an agreeable solution? If so, you
    can add my Signed-off-by and Tested-by to it.

    Cheers,
    Jeff

