From: Roman Gushchin <klamm@yandex-team.ru>
Subject: Re: [PATCH] mm: use only per-device readahead limit
Date: Fri, 21 Aug 2015

21.08.2015, 21:17, "Linus Torvalds" <torvalds@linux-foundation.org>:
> On Fri, Aug 21, 2015 at 10:12 AM, Roman Gushchin <klamm@yandex-team.ru> wrote:
>>  There are devices, which require custom readahead limit.
>>  For instance, for RAIDs it's calculated as number of devices
>>  multiplied by chunk size times 2.
>
> So afaik, the default read-ahead size is 128kB, which is actually
> smaller than the old 512-page limit.
>
> Which means that you probably changed "ra_pages" somehow. Is it some
> system tool that does that automatically, and if so based on what,
> exactly?

It's just the raid driver. For instance, drivers/md/raid5.c:6898 .
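
The relevant part (quoting roughly from memory, so the exact lines may
differ) bumps ra_pages when the array is assembled:

	/* read-ahead size must cover two whole stripes, i.e.
	 * 2 * (data disks) * chunk size, in pages
	 */
	int data_disks = conf->previous_raid_disks - conf->max_degraded;
	int stripe = data_disks *
		((mddev->chunk_sectors << 9) / PAGE_SIZE);

	if (mddev->queue->backing_dev_info.ra_pages < 2 * stripe)
		mddev->queue->backing_dev_info.ra_pages = 2 * stripe;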

On my setup I unexpectedly got even a slight performance increase
over the O_DIRECT case and over the old memory-based readahead limit,
as you can see from the numbers in the commit message (1.2 GB/s vs 1.1 GB/s).

So, I like the idea of delegating the readahead limit calculation to the underlying I/O level.

> I'm also slightly worried about the fact that now the max read-ahead
> may actually be zero,

For "normal" readahead nothing changes. Only readahead syscall and
madvise(MADV_WILL_NEED) cases are affected.
I think, it's ok to do nothing, if readahead was deliberately disabled.
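
For clarity, the core of the change on that path is just clamping the
request by the per-device limit; schematically (a sketch of the idea,
not the exact diff):

	/* in force_page_cache_readahead(), instead of the old
	 * memory-based max_sane_readahead() cap: */
	nr_to_read = min(nr_to_read, inode_to_bdi(mapping->host)->ra_pages);

With ra_pages == 0 that min() naturally turns these calls into no-ops.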

> and/or basically infinite (there's a ioctl to
> set it that only tests that it's not negative). Does everything react
> ok to that?

It's an open question whether we have to add some checks to avoid misconfiguration.
In any case, we can check the limit when it is set rather than adjust it dynamically.
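
For reference, the knob in question is the BLKRASET/BLKRAGET ioctl pair;
a minimal userspace sketch of how it is driven (the device path is just
an example):

	#include <fcntl.h>
	#include <stdio.h>
	#include <unistd.h>
	#include <sys/ioctl.h>
	#include <linux/fs.h>

	int main(void)
	{
		long ra;
		int fd = open("/dev/md0", O_RDONLY);	/* example device */

		if (fd < 0) {
			perror("open");
			return 1;
		}

		if (ioctl(fd, BLKRAGET, &ra) == 0)
			printf("readahead: %ld sectors\n", ra);	/* 512-byte units */

		/*
		 * BLKRASET takes the new value directly as the argument (not
		 * a pointer); apart from a CAP_SYS_ADMIN check, essentially
		 * any value is accepted, hence the misconfiguration concern.
		 */
		if (ioctl(fd, BLKRASET, 1UL << 20) != 0)
			perror("BLKRASET");

		close(fd);
		return 0;
	}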

--
Roman

