Subject: Re: clustering page-ins
On Mon, 19 Jul 1999, Jamie Lokier wrote:

> Yes it's good that read() was optimised first, no it's not good to
> leave mmap() unoptimised, IMO.
>
> A number of "highly optimised" streaming designs use mmap(). Things
> like http servers, audio & video players. Unfortunately they currently
> take a relative hit due to lack of asynchronous readahead, so the bright
> ones leave it configurable.

I have an idea which might help curb this problem.

Currently we only do readahead of the cluster the requested
page falls in (and only if the page is not in memory). This
works rather well, but not well enough for all apps.

The idea: define the top and bottom 1/8th of each zone as
'border' pages. When a page in a border region is requested
AND that page is already in memory (meaning we have already
done readahead on the current zone and it proved to be
successful), THEN we read in the zone next door.

Because the previous readahead was (apparently) successful, we
know we have a good chance of reading in data we will actually
need. As a bonus, the algorithm works for both read-ahead and
read-behind (and for zoned data files where the zones exceed
the cluster size).
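To make the heuristic concrete, here is a minimal sketch in C.
It is only an illustration of the idea above, not real kernel
code: ZONE_PAGES, page_cached() and zone_readahead() are
made-up names standing in for whatever the page cache and
readahead code would actually provide.

        #define ZONE_PAGES   16                   /* pages per readahead zone      */
        #define BORDER_PAGES (ZONE_PAGES / 8)     /* 1/8th at each end is 'border' */

        /* Called on every page request for 'file' at page 'index'. */
        static void border_readahead(struct file *file, unsigned long index)
        {
                unsigned long zone = index / ZONE_PAGES;
                unsigned long off  = index % ZONE_PAGES;

                if (!page_cached(file, index)) {
                        /* Current behaviour: page missing, read in its own zone. */
                        zone_readahead(file, zone);
                        return;
                }

                /*
                 * Page already in memory: the earlier readahead of this
                 * zone paid off.  If we are in a border region, read the
                 * neighbouring zone -- this gives both read-ahead and
                 * read-behind.  (Bounds check against the file size is
                 * omitted for brevity.)
                 */
                if (off < BORDER_PAGES && zone > 0)
                        zone_readahead(file, zone - 1);
                else if (off >= ZONE_PAGES - BORDER_PAGES)
                        zone_readahead(file, zone + 1);
        }

In a real implementation zone_readahead() would of course check
that the neighbouring zone is not already resident before
queueing any I/O.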

> > however, as the number of extra pages read during read-ahead increases,
> > the overall efficiency of the system as a whole drops,
>
> On this I disagree. If a big disk read is not much slower than a small
> disk read (i.e. a modern disk), but random head movement is slow, I'd
> have thought the added clustering due to read-ahead would reduce the
> total number of head movements when two applications are competing for
> different areas of the disk.

Clustering does indeed work this way. I have observed
(during a busy time of my emergency packetstorm mirror) a
VERY loaded 486/66 doing about 7000(!) pagefaults a second
but 'only' 70 disk operations a second.

During normal operation, the number of blocks/pages read
in is approximately ten times the number of disk operations,
so we pay only one tenth of the _very_ expensive disk seeks.
And with disk seeks being FAR more expensive than memory
accesses, I'd call that a rather good tradeoff: if we
accidentally evict a needed-later page by reading ahead 16
pages, it costs us one 'extra' seek to bring that page back,
against the up to fifteen seeks the readahead saved!
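Spelled out as a rough back-of-envelope (assuming, for the sake
of illustration, that every disk operation costs one seek):

        sixteen separate faults:    16 seeks for 16 pages
        one 16-page readahead:       1 seek  for 16 pages
        one wrongly evicted page:   +1 seek  to fault it back in
                                    -------------------------------
        worst case with readahead:   2 seeks instead of 16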

regards,

Rik -- Open Source: you deserve to be in control of your data.
+-------------------------------------------------------------------+
| Le Reseau netwerksystemen BV: http://www.reseau.nl/ |
| Linux Memory Management site: http://www.linux.eu.org/Linux-MM/ |
| Nederlandse Linux documentatie: http://www.nl.linux.org/ |
+-------------------------------------------------------------------+


-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
