Subject: Re: [RFC] - Some notions that I would like comments on
On Sun, 18 Jul 1999, Jamie Lokier wrote:
> It's not about the size of clusters or having potentially more data in
> cache already. It's about having _all_ the data in cache before it's
> needed no matter how large the file.

Well, yes, that's the ideal.

> This is done by tracking soft page faults -- when a page that is
> _already_ in cache is mapped into the process on a fault. In other
> words, the found_page case is the interesting one here. no_cached_page
> is also interesting but is never reached once sequential readahead
> reaches a steady state!

I think the biggest argument against this approach is that today the page
fault fast path in filemap_nopage is very straight-line and uncomplicated.
Adding read-ahead logic to that path will slow everyone down all the time.
The whole reason to do read-ahead only when waiting for a page is that
there is nothing else to do: the overhead of scheduling the extra reads
is swallowed by the wait for the page we really want.
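
To make the shape of that argument concrete, here is a minimal user-space
sketch (assuming a toy page cache) of the control flow being argued about;
the names below are made-up stand-ins for this example, not real kernel
interfaces:

/*
 * Toy model: read-ahead is only scheduled on the miss path, where the
 * faulting process is about to sleep on disk I/O anyway.
 */
#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

#define CACHE_PAGES 16
#define READAHEAD    4

struct page { bool present, uptodate; };

static struct page cache[CACHE_PAGES];          /* toy page cache */

/* hypothetical async reads; here they just mark the pages as arriving */
static void schedule_readahead(size_t from)
{
        for (size_t i = from; i < from + READAHEAD && i < CACHE_PAGES; i++)
                cache[i].present = cache[i].uptodate = true;
}

/* hypothetical synchronous read; the real thing sleeps on disk I/O */
static struct page *read_page_blocking(size_t index)
{
        cache[index].present = cache[index].uptodate = true;
        return &cache[index];
}

static struct page *nopage_sketch(size_t index)
{
        struct page *page = &cache[index];

        if (page->present && page->uptodate)
                return page;    /* found_page: the hot fast path, untouched */

        /*
         * no_cached_page (or a page not yet up to date): we are going to
         * wait for disk I/O anyway, so queueing extra read-ahead here is
         * effectively free from the faulting process's point of view.
         */
        schedule_readahead(index + 1);
        return read_page_blocking(index);
}

int main(void)
{
        nopage_sketch(0);       /* miss: blocks, and schedules read-ahead */
        printf("page 1 up to date after fault on page 0: %d\n",
               cache[1].uptodate);
        return 0;
}

The only point of the sketch is that found_page stays a plain lookup,
while the read-ahead cost sits on a path that is going to sleep anyway.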

Can you explain why you believe that you will have better read-ahead
information in the found_page case than we already get in no_cached_page?
With my patch, the second run of a program whose working set almost exceeds
physical RAM will only read pages that have already been made up to date.
That's pretty ideal, I think.

My patch extracts the cluster page-in logic into a separate function, so
it's easy to move and copy. I tried putting this function right at the
top of filemap_nopage, and in found_page. Both cases were slower than the
stock kernel, I believe because they slowed down the case that is by far
the most prevalent: the one where the page already resides in the page cache.
> The net result is once the readahead window opens up enough,
> filemap_nopage never waits on I/O, not even once per cluster. With the
> code in place it might even be worth using readahead that's smaller than
> the random access readaround cluster size. More I/Os, less memory used.

More I/Os means less disk scalability, IMO. If each program is reading
ahead with more I/Os, there's less disk bandwidth and page cache space to
share with the other programs on the system. This may not affect a desktop
system much, but it matters a great deal on servers, and it will definitely
make the knee between "system-wide working set fits in memory" and
"system-wide working set no longer fits in memory" a much steeper one.

I believe that, ideally, the I/O count of a read-ahead enabled system should
be as close as possible to that of a system without read-ahead. This reduces
cache pollution and disk bandwidth requirements. But it's a trade-off
between making a single application go as fast as the wind vs. allowing all
running applications to share the system's available resources fairly.

- Chuck Lever
--
corporate: <chuckl@netscape.com>
personal: <chucklever@netscape.net> or <cel@monkey.org>

The Linux Scalability project:
http://www.citi.umich.edu/projects/linux-scalability/

