Subject: Re: [PATCH] readahead:add blk_run_backing_dev

At 11:37 09/06/01, Wu Fengguang wrote:
>On Wed, May 27, 2009 at 11:06:37AM +0800, Hisashi Hifumi wrote:
>>
>> At 11:57 09/05/27, Wu Fengguang wrote:
>> >On Wed, May 27, 2009 at 10:47:47AM +0800, Hisashi Hifumi wrote:
>> >>
>> >> At 11:36 09/05/27, Wu Fengguang wrote:
>> >> >On Wed, May 27, 2009 at 10:21:53AM +0800, Hisashi Hifumi wrote:
>> >> >>
>> >> >> At 11:09 09/05/27, Wu Fengguang wrote:
>> >> >> >On Wed, May 27, 2009 at 08:25:04AM +0800, Hisashi Hifumi wrote:
>> >> >> >>
>> >> >> >> At 08:42 09/05/27, Andrew Morton wrote:
>> >> >> >> >On Fri, 22 May 2009 10:33:23 +0800
>> >> >> >> >Wu Fengguang <fengguang.wu@intel.com> wrote:
>> >> >> >> >
>> >> >> >> >> > I tested above patch, and I got same performance number.
>> >> >> >> >> > I wonder why if (PageUptodate(page)) check is there...
>> >> >> >> >>
>> >> >> >> >> Thanks! This is an interesting micro timing behavior that
>> >> >> >> >> demands some research work. The above check is to confirm if it's
>> >> >> >> >> the PageUptodate() case that makes the difference. So why does that
>> >> >> >> >> case happen so frequently that it impacts performance? Will it also
>> >> >> >> >> happen in NFS?
>> >> >> >> >>
>> >> >> >> >> The problem is readahead IO pipeline is not running smoothly, which is
>> >> >> >> >> undesirable and not well understood for now.
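
(As I understand it, the change under discussion is roughly of the shape
sketched below. This is only an illustration against 2.6.30-era interfaces,
not the literal patch: blk_run_backing_dev() is the function named in the
patch subject and PageUptodate() is the check quoted above, but the helper
name, the exact hook point in the read/readahead path, and the use of
mapping->backing_dev_info are my guesses at the surrounding context.)

#include <linux/fs.h>           /* struct address_space, ->backing_dev_info */
#include <linux/page-flags.h>   /* PageUptodate() */
#include <linux/backing-dev.h>  /* blk_run_backing_dev() */

/*
 * Sketch only: if the page the reader reached is already uptodate, kick the
 * backing device so that readahead I/O still sitting in the request queue is
 * submitted now instead of waiting for the next unplug.
 */
static void maybe_kick_backing_dev(struct address_space *mapping,
				   struct page *page)
{
	if (PageUptodate(page))
		blk_run_backing_dev(mapping->backing_dev_info, NULL);
}
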
>> >> >> >> >
>> >> >> >> >The patch causes a remarkably large performance increase. A 9%
>> >> >> >> >reduction in time for a linear read? I'd be surprised if the workload
>> >> >> >>
>> >> >> >> Hi Andrew.
>> >> >> >> Yes, I tested this with dd.
>> >> >> >>
>> >> >> >> >even consumed 9% of a CPU, so where on earth has the kernel gone to?
>> >> >> >> >
>> >> >> >> >Have you been able to reproduce this in your testing?
>> >> >> >>
>> >> >> >> Yes, this test on my environment is reproducible.
>> >> >> >
>> >> >> >Hisashi, does your environment have some special configurations?
>> >> >>
>> >> >> Hi.
>> >> >> My testing environment is as follows:
>> >> >> Hardware: HP DL580
>> >> >> CPU:Xeon 3.2GHz *4 HT enabled
>> >> >> Memory:8GB
>> >> >> Storage: Dothill SANNet2 FC (7Disks RAID-0 Array)
>> >> >
>> >> >This is a big hardware RAID. What's the readahead size?
>> >> >
>> >> >The numbers look too small for a 7 disk RAID:
>> >> >
>> >> > > #dd if=testdir/testfile of=/dev/null bs=16384
>> >> > >
>> >> > > -2.6.30-rc6
>> >> > > 1048576+0 records in
>> >> > > 1048576+0 records out
>> >> > > 17179869184 bytes (17 GB) copied, 224.182 seconds, 76.6 MB/s
>> >> > >
>> >> > > -2.6.30-rc6-patched
>> >> > > 1048576+0 records in
>> >> > > 1048576+0 records out
>> >> > > 17179869184 bytes (17 GB) copied, 206.465 seconds, 83.2 MB/s
>> >> >
>> >> >I'd suggest you configure the array properly before coming back to
>> >> >measuring the impact of this patch.
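
(For the run quoted above: 224.182 s down to 206.465 s is a
(224.182 - 206.465) / 224.182 ≈ 7.9% reduction in elapsed time, i.e.
83.2 / 76.6 ≈ 8.6% higher throughput.)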
>> >>
>> >>
>> >> I created a 16GB file on this disk array, mounted it on testdir, and ran
>> >> dd against this directory.
>> >
>> >I mean, you should get >300MB/s throughput with 7 disks, and you
>> >should seek ways to achieve that before testing out this patch :-)
>>
>> Throughput numbers vary from one storage array to another.
>> On my hardware environment I think this number is valid and
>> my patch is effective.
>
>What's your readahead size? Is it large enough to cover the stripe width?

Do you mean the storage array's readahead size?
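
(If the question is about the kernel's per-device readahead rather than
anything the array does internally: on 2.6.30-era kernels it can be read with
"blockdev --getra /dev/<device>" (in 512-byte sectors) or via
/sys/block/<device>/queue/read_ahead_kb, and raised with "blockdev --setra".
The default of 256 sectors (128 KB) is usually smaller than the stripe width
of a 7-disk RAID-0; /dev/<device> here is just a placeholder for whatever the
array's LUN shows up as.)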


