Subject: Re: fio sync read 4k block size 35% regression
From: Zhang, Yanmin
Date: 2009-07-02
On Wed, 2009-07-01 at 20:50 +0800, Wu Fengguang wrote:
> On Wed, Jul 01, 2009 at 01:03:55PM +0800, Zhang, Yanmin wrote:
> > On Wed, 2009-07-01 at 12:10 +0800, Wu Fengguang wrote:
> > > On Wed, Jul 01, 2009 at 11:25:33AM +0800, Zhang, Yanmin wrote:
> > > > Comparing with 2.6.30, fio sync read (block size 4k) has about a 35% regression
> > > > with kernel 2.6.31-rc1 on my Stoakley machine with a JBOD (13 SCSI disks).
> > > >
> > > > Every disk has 1 partition and 4 1-GB files. Start 10 processes per disk to
> > > > do sync reads sequentially.
> > > >
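For reference, here is a minimal fio invocation that approximates the workload
described above; the job name, mount point, and exact flags are illustrative
guesses, not the actual job file:

    # 10 concurrent sequential sync readers per disk, 4k blocks, 1GB files;
    # repeat with a separate job per disk for all 13 disks
    fio --name=disk1 --directory=/mnt/disk1 --ioengine=sync \
        --rw=read --bs=4k --size=1g --numjobs=10

The described layout (4 files shared by 10 readers per disk) would need
explicit filename= entries; the sketch above simply gives each reader its
own 1GB file.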
> > > > Bisected down to the patch below.
> > > >
> > > > 51daa88ebd8e0d437289f589af29d4b39379ea76 is first bad commit
> > > > commit 51daa88ebd8e0d437289f589af29d4b39379ea76
> > > > Author: Wu Fengguang <fengguang.wu@intel.com>
> > > > Date: Tue Jun 16 15:31:24 2009 -0700
> > > >
> > > > readahead: remove sync/async readahead call dependency
> > > >
> > > > The readahead call scheme is error-prone in that it expects the call sites
> > > > to check for async readahead after doing a sync one. I.e.
> > > >
> > > >     if (!page)
> > > >             page_cache_sync_readahead();
> > > >     page = find_get_page();
> > > >     if (page && PageReadahead(page))
> > > >             page_cache_async_readahead();
> > > >
> > > >
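Spelled out, the call-site pattern being criticized looks roughly like this,
modeled on do_generic_file_read() in mm/filemap.c (a sketch with error
handling and some arguments abbreviated, not the exact 2.6.30 code):

    struct page *page = find_get_page(mapping, index);
    if (!page) {
            /* cache miss: start synchronous readahead, then look again */
            page_cache_sync_readahead(mapping, ra, filp,
                                      index, last_index - index);
            page = find_get_page(mapping, index);
    }
    if (page && PageReadahead(page))
            /* hit a PG_readahead-marked page: pipeline the next window */
            page_cache_async_readahead(mapping, ra, filp, page,
                                       index, last_index - index);

The scheme is fragile because every caller must remember both halves;
forgetting the async call silently degrades readahead.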
> > > > I also tested block sizes 64k and 128k, and they show no regression. Perhaps
> > > > because the default read_ahead_kb is 128?
> > > >
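The default readahead window is indeed 128KB on kernels of this era
(VM_MAX_READAHEAD in include/linux/mm.h), and it can be checked per device;
sdb here is an illustrative device name:

    # per-device readahead window, in KB
    cat /sys/block/sdb/queue/read_ahead_kb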
> > > > The other 2 machines have no such regression. Their JBODs consist
> > > > of 12 and 7 SATA/SAS disks, and every disk has 2 partitions.
> > >
> > > Yanmin, thanks for the tests!
> > >
> > > Maybe the patch posted here can restore the performance:
> > >
> > > http://lkml.org/lkml/2009/5/21/319
> > I tried it and it doesn't help.
>
> Then let's check what's happening behind the scenes :)
>
> Please apply the attached patch and run
>
> echo 1 > /debug/readahead/trace_enable
> # do benchmark
> echo 0 > /debug/readahead/trace_enable
>
> and send the dmesg, which will contain lots of lines like
>
> [ 54.738105] readahead-initial0(pid=3290(zsh), dev=00:10(0:10), ino=105910(dmesg), req=0+1, ra=0+4-3, async=0) = 2
> [ 54.751801] readahead-subsequent(pid=3290(dmesg), dev=00:10(0:10), ino=105910(dmesg), req=1+60, ra=4+8-8, async=1, miss=0) = 0
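The /debug paths above assume debugfs is already mounted at /debug; on a
stock setup one would first do something like:

    # the readahead/ directory only exists with the tracing patch applied
    mount -t debugfs none /debug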
I enlarged the kernel log buffer to 2MB and captured the data below.
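One way to get a 2MB log buffer without rebuilding the kernel, assuming the
usual boot-parameter route rather than changing CONFIG_LOG_BUF_SHIFT:

    # append to the kernel command line; takes size suffixes like M
    log_buf_len=2M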

In addition, I added new test cases that use mmap to read the files sequentially.
On this machine, there is about a 40% regression. Reverting your patch fixes it.
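In fio terms, the mmap variant presumably just swaps the I/O engine in the
earlier sketch:

    # read through an mmap()ed mapping instead of read(2)
    fio --name=disk1 --directory=/mnt/disk1 --ioengine=mmap \
        --rw=read --bs=4k --size=1g --numjobs=10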

On another machine with another JBOD (7 SAS disks), fio_mmap_sync_read_4k (64k/128k)
has about a 30% regression, but it's not caused by your patch. I am bisecting it on the
2nd machine now.

Yanmin

[unhandled content-type:application/x-compressed-tar]