Date: Fri, 26 Oct 2012
From: Fengguang Wu
Subject: Re: [PATCH] mm: readahead: remove redundant ra_pages in file_ra_state
On Fri, Oct 26, 2012 at 03:03:12PM +0800, Ni zhan Chen wrote:
> On 10/26/2012 02:58 PM, Fengguang Wu wrote:
> >> static void shrink_readahead_size_eio(struct file *filp,
> >> 					struct file_ra_state *ra)
> >> {
> >>-	ra->ra_pages /= 4;
> >>+	spin_lock(&filp->f_lock);
> >>+	filp->f_mode |= FMODE_RANDOM;
> >>+	spin_unlock(&filp->f_lock);
> >>
> >>As the example in the comment above this function shows, the read may
> >>still be sequential, and switching directly to FMODE_RANDOM will waste
> >>IO bandwidth.
> >Yes, immediately disabling readahead may hurt IO performance; the
> >original '/ 4' may perform better when only 1-3 IO errors are
> >encountered.
>
> Hi Fengguang,
>
> Why should the number be 1-3?

The original behavior is '/= 4' on each error.

After 1 error, the readahead size will be shrunk to 1/4 of its original value
After 2 errors, it will be shrunk to 1/16
After 3 errors, it will be shrunk to 1/64
After 4 errors, it will be effectively 0 (readahead disabled)
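
For illustration only, here is a minimal user-space sketch (not kernel code;
the real shrink_readahead_size_eio() operates on struct file_ra_state) that
just prints the fraction of the readahead window left after each error:

#include <stdio.h>

/*
 * Sketch of the original '/= 4' policy: after n IO errors the
 * readahead window is 1/4^n of its starting size, i.e. it decays
 * geometrically instead of being disabled on the first error.
 */
int main(void)
{
	unsigned int denominator = 1;
	int errors;

	for (errors = 1; errors <= 4; errors++) {
		denominator *= 4;	/* one '/= 4' per IO error */
		printf("after %d error(s): window = 1/%u of original\n",
		       errors, denominator);
	}
	return 0;
}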

Thanks,
Fengguang

