Date: Wed, 27 May 2009
From: Wu Fengguang <fengguang.wu@intel.com>
Subject: Re: [PATCH] readahead:add blk_run_backing_dev
On Wed, May 27, 2009 at 10:36:01AM +0800, Andrew Morton wrote:
> On Wed, 27 May 2009 11:21:53 +0900 Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp> wrote:
>
> >
> > At 11:09 09/05/27, Wu Fengguang wrote:
> > >On Wed, May 27, 2009 at 08:25:04AM +0800, Hisashi Hifumi wrote:
> > >>
> > >> At 08:42 09/05/27, Andrew Morton wrote:
> > >> >On Fri, 22 May 2009 10:33:23 +0800
> > >> >Wu Fengguang <fengguang.wu@intel.com> wrote:
> > >> >
> > >> >> > I tested the above patch, and I got the same performance number.
> > >> >> > I wonder why the if (PageUptodate(page)) check is there...
> > >> >>
> > >> >> Thanks! This is an interesting micro-timing behavior that
> > >> >> demands some research work. The above check is to confirm whether
> > >> >> it's the PageUptodate() case that makes the difference. So why does
> > >> >> that case happen so frequently as to impact performance? Will it
> > >> >> also happen in NFS?
> > >> >>
> > >> >> The problem is that the readahead IO pipeline is not running
> > >> >> smoothly, which is undesirable and not well understood for now.
> > >> >
> > >> >The patch causes a remarkably large performance increase. A 9%
> > >> >reduction in time for a linear read? I'd be surprised if the workload
> > >>
> > >> Hi Andrew.
> > >> Yes, I tested this with dd.
> > >>
> > >> >even consumed 9% of a CPU, so where on earth has the kernel gone to?
> > >> >
> > >> >Have you been able to reproduce this in your testing?
> > >>
> > >> Yes, this test is reproducible in my environment.
> > >
> > >Hisashi, does your environment have any special configuration?
> >
> > Hi.
> > My testing environment is as follows:
> > Hardware: HP DL580
> > CPU: Xeon 3.2GHz x4, HT enabled
> > Memory: 8GB
> > Storage: Dothill SANNet2 FC (7-disk RAID-0 array)
> >
> > I ran dd against this disk array and got the improved performance number.
> >
> > I noticed that when the target is just one HDD, the performance
> > improvement is very small.
> >
>
> Ah. So it's likely to be some strange interaction with the RAID setup.

The normal case is: if page N becomes uptodate at time T(N), then
T(N) <= T(N+1) holds. But for RAID, the data arrival time depends on the
runtime status of the individual disks, which breaks that formula. For
example, on a two-disk RAID-0 array, page N+1 (served by an idle disk)
may complete before page N (served by a busy disk). So in
do_generic_file_read(), just after submitting the async readahead IO
request, the current page may well be uptodate already, in which case
the page won't be locked and the block device won't be implicitly
unplugged:

	if (PageReadahead(page))
		/* submits readahead IO, possibly leaving the queue plugged */
		page_cache_async_readahead(...);
	if (!PageUptodate(page))
		goto page_not_up_to_date;
	/* ... */
page_not_up_to_date:
	/* blocking here is what implicitly unplugs the device */
	lock_page_killable(page);

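For reference, the implicit unplug is driven by the page lock: waiting in
lock_page_killable() ends up calling the mapping's sync_page operation,
which for block-backed mappings is block_sync_page(). A paraphrased
sketch of that path, from 2.6.30-era fs/buffer.c (details may differ
across kernel versions):

	/* fs/buffer.c: invoked while a reader waits on a locked page */
	void block_sync_page(struct page *page)
	{
		struct address_space *mapping;

		smp_mb();
		mapping = page_mapping(page);
		if (mapping)	/* unplug the queue backing this mapping */
			blk_run_backing_dev(mapping->backing_dev_info, page);
	}

So when the current page is already uptodate and lock_page() is skipped,
nothing runs this path, and the freshly submitted readahead IO can sit
in the plugged queue.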

Therefore explicit unplugging can help, so

Acked-by: Wu Fengguang <fengguang.wu@intel.com>

The only question is, shall we avoid the double unplug by doing this?

---
mm/readahead.c | 10 ++++++++++
1 file changed, 10 insertions(+)

--- linux.orig/mm/readahead.c
+++ linux/mm/readahead.c
@@ -490,5 +490,15 @@ page_cache_async_readahead(struct addres
 
 	/* do read-ahead */
 	ondemand_readahead(mapping, ra, filp, true, offset, req_size);
+
+	/*
+	 * Normally the current page is !uptodate and lock_page() will be
+	 * called immediately, which implicitly unplugs the device. However
+	 * this is not always true for RAID configurations, where data does
+	 * not necessarily arrive in submission order. In that case we need
+	 * to kick off the IO explicitly.
+	 */
+	if (PageUptodate(page))
+		blk_run_backing_dev(mapping->backing_dev_info, NULL);
 }
 EXPORT_SYMBOL_GPL(page_cache_async_readahead);
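For context, blk_run_backing_dev() itself is a thin wrapper around the
backing device's unplug callback (paraphrased from 2.6.30-era
include/linux/backing-dev.h):

	static inline void blk_run_backing_dev(struct backing_dev_info *bdi,
					       struct page *page)
	{
		/* run the unplug callback, if the device registered one */
		if (bdi && bdi->unplug_io_fn)
			bdi->unplug_io_fn(bdi, page);
	}

Passing a NULL page, as the patch does, just means "no particular page
to wait for; start whatever IO is queued".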
