Date: Sun, 10 Jan 1999 22:57:38 GMT
From: "Stephen C. Tweedie" <>
Subject: Re: Reading few large blocks instead of many small when executing a file?
Hi,
On Tue, 29 Dec 1998 06:39:41 -0500 (EST), "Mark H. Wood" <mwood@IUPUI.Edu> said:
>> It struck me that reading for example 16kb at once is a lot faster than
>> reading 4kb 4 times. So why not read 4 or 8 pages at every segfault (and
>> when the program ?
> Not stupid at all, though its value might be affected by other details of
> Linux memory management.
I implemented this recently, and the 2.2-pre kernels page in a cluster of 16 pages for all swap or mmap faults by default (fewer pages on smaller-memory machines). The performance improvement on sequential mmap(), swapin and program loading is remarkable.
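To make that concrete, here is a minimal sketch of the access pattern
that wins (my illustration, not code from this thread; the file name
and the 4kB page size are placeholder assumptions):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
        struct stat st;
        unsigned char *p;
        long i, sum = 0;
        int fd = open("data.bin", O_RDONLY);    /* placeholder name */

        if (fd < 0 || fstat(fd, &st) < 0) {
                perror("open/fstat");
                return 1;
        }
        p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        /* Touch one byte per 4kB page, in order.  Without clustering,
         * each touch of a non-resident page costs a separate 4kB read;
         * with a 16-page cluster only every 16th touch goes to disk. */
        for (i = 0; i < st.st_size; i += 4096)
                sum += p[i];

        printf("checksum %ld\n", sum);
        munmap(p, st.st_size);
        close(fd);
        return 0;
}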
> Actually your description is so much like VMS pagefault clustering
> that it's scary.
It's not accidental, either --- my last job was working on VMS kernel internals. :)
> IIRC the default cluster size is a SYSGEN parameter (somewhat like
> /proc/sys/somethingorother)
The default cluster size is /proc/sys/vm/page-cluster.
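The stored value is a log2, so the 16-page default corresponds to a
value of 4.  A quick throwaway sketch for printing the current setting
(assuming only that the file holds that single integer):

#include <stdio.h>

int main(void)
{
        FILE *f = fopen("/proc/sys/vm/page-cluster", "r");
        int log2_pages;

        if (!f || fscanf(f, "%d", &log2_pages) != 1)
                return 1;
        printf("page-cluster = %d (%d pages per fault)\n",
               log2_pages, 1 << log2_pages);
        fclose(f);
        return 0;
}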
> Since programs' memory use patterns differ, I think a good deal of
> performance measurement would be required to determine whether clustering
> actually helps enough to be worth the added complexity, and whether making
> the cluster size tunable is worthwhile. If we're reading 4kB/fault now
> then there may not be much to be gained from clustering.
Probably not worth worrying about unless there is an madvise() hint, I'd expect.
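For concreteness, a sketch of what such a hint could look like,
modelled on the BSD-style madvise() interface (not something Linux
implements today, so purely illustrative):

#include <stddef.h>
#include <stdio.h>
#include <sys/mman.h>

/* Hint that [addr, addr+len) will be read sequentially, so the
 * kernel may cluster aggressively; MADV_RANDOM would ask it not
 * to cluster at all. */
void hint_sequential(void *addr, size_t len)
{
        if (madvise(addr, len, MADV_SEQUENTIAL) < 0)
                perror("madvise");
}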
--Stephen