    Subject: Re: Bad SSD performance with recent kernels
    On Mon, 2012-01-30 at 17:26 -0500, Vivek Goyal wrote:
    > On Mon, Jan 30, 2012 at 03:51:49PM +0100, Eric Dumazet wrote:
    > > On Monday 30 January 2012 at 22:28 +0800, Wu Fengguang wrote:
    > > > On Mon, Jan 30, 2012 at 06:31:34PM +0800, Li, Shaohua wrote:
    > > >
    > > > > It looks like the 2.6.39 block plug introduces some latency here.
    > > > > Deleting blk_start_plug/blk_finish_plug in generic_file_aio_read
    > > > > seems to work around the issue. The plug seems not good for
    > > > > sequential IO, because the readahead code already has a plug and
    > > > > fine-grained control over it.
    > > >
    > > > Why not remove the generic_file_aio_read() plug completely? It
    > > > actually prevents unplugging immediately after the readahead IO is
    > > > submitted, which in turn stalls the IO pipeline, as shown by Eric's
    > > > blktrace data.
    > > >
    > > > Eric, will you test this patch? Thank you.
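
    The change being discussed would look roughly like the sketch below,
    against 2.6.39-era mm/filemap.c (an illustration of the idea, not the
    exact patch posted in this thread):

        --- a/mm/filemap.c
        +++ b/mm/filemap.c
        @@ generic_file_aio_read @@
                loff_t *ppos = &iocb->ki_pos;
        -       struct blk_plug plug;

                count = 0;
                retval = generic_segment_checks(iov, &nr_segs, &count, VERIFY_WRITE);
                if (retval)
                        return retval;

        -       blk_start_plug(&plug);
        -
                /* coalesce the iovecs and go direct-io if desired */
        ...
         out:
        -       blk_finish_plug(&plug);
                return retval;
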
    >
    > Can you please run blktrace again with this patch applied? I am curious
    > to see what the traffic pattern looks like now.
    >
    > In your previous trace, there were many small 8-sector requests which
    > were merged into 512-sector requests before being dispatched to disk. (I am
    > not sure why those requests are not bigger. Shouldn't the readahead logic
    > submit a bigger request?) Now with the plug/unplug logic removed, I assume
    > we should be doing less merging and dispatching more, smaller requests. Maybe
    > that is what is helping and cutting down on disk idle time.
    >
    > In the previous logs, a 512-sector request seems to take around 1 ms to
    > complete after dispatch. Between requests the disk seems to be idle
    > for around 0.5 to 0.6 ms. Of this, about 0.3 ms seems to go into just
    > coming up with a new request after completion of the previous one, and
    > another 0.3 ms seems to be consumed in merging the smaller IOs. So if we
    > don't wait for merging, it should keep the disk busy for 0.3 ms more, which
    > is 30% of the time it takes to complete a 512-sector request. So
    > theoretically it can give a 30% boost for this workload (assuming request
    > size does not impact the disk throughput very severely).
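
    Spelling that estimate out with the same numbers (a back-of-the-envelope
    restatement of the reasoning above, not new measurements):

        service time per 512-sector request     ~1.0 ms
        idle gap between requests               ~0.5-0.6 ms
          of which: issuing the next request    ~0.3 ms
          of which: waiting for merges          ~0.3 ms
        saved by not waiting for merges         ~0.3 ms per request
        as a fraction of service time           0.3 / 1.0 = 30%
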
    >
    > Anyway, some blktrace data will shed some light on this.
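
    For reference, a trace of that kind can be captured with something along
    these lines (the device name and duration here are placeholders, not
    taken from the thread):

        blktrace -d /dev/sdX -o ssd-trace -w 30   # record 30s of block-layer events
        blkparse -i ssd-trace                     # decode Q/G/M/D/C events per request
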
    Yep, I suspect the plug merges requests into big ones too (iostat shows it
    as well); that's why I regard deleting the plug in generic_file_aio_read
    as only a workaround. I still think readahead has something to do with it. I
    observed that the async readahead first issues (A, A+2M) and then follows
    with (A+128k, A+2M), (A+256k, A+2M), and so on; the later readaheads do no
    work because we already have (A, A+2M) in memory by then. Anyway, I can
    reproduce the issue and will play with it more today.
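
    As a toy model of that window pattern (a standalone sketch built from the
    offsets described above, not kernel code):

        /* Simulate async readahead windows (A, A+2M), (A+128k, A+2M), ...
         * and count how much genuinely new IO each window would submit. */
        #include <stdio.h>

        #define KB 1024UL
        #define MB (1024UL * KB)

        int main(void)
        {
                unsigned long a = 0;            /* window start A */
                unsigned long end = a + 2 * MB; /* every window ends at A+2M */
                unsigned long covered = end;    /* first window fills (A, A+2M) */
                unsigned long start;

                printf("window (A, A+2M): %lu bytes of new IO\n", end - a);
                for (start = a + 128 * KB; start < end; start += 128 * KB)
                        printf("window (A+%luk, A+2M): %lu bytes of new IO\n",
                               start / KB, end > covered ? end - covered : 0);
                return 0;
        }

    Every window after the first submits 0 bytes of new IO, matching the
    observation that the later readaheads find everything already cached.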

