Date: Mon, 28 Dec 1998 18:28:33 +0100
From: Hans Löfving <>
Subject: Reading a few large blocks instead of many small ones when executing a file?
Hi all, I hope this is the right place to post this to; forgive me if it is not.

I was reading "The Linux Kernel" ( http://metalab.unc.edu/LDP/LDP/tlk/tlk.html ) and, please correct me if I'm wrong, but as far as I understood the mm part of tlk, Linux reads only one memory page's worth of an executable file when it is run, and then, each time more of the file needs to be read from the hard drive, only one page at a time is read.

It struck me that reading, for example, 16 KB at once is a lot faster than reading 4 KB four times. So why not read 4 or 8 pages at every page fault (and when the program is started)?

I am very uncertain whether this is a good idea or not. I understand it would not be a good idea for systems without much memory, but wouldn't it be a performance boost for systems that can "afford" this memory waste? Perhaps the number of pages Linux reads at a time could be configurable at compile time, or even at runtime through the /proc filesystem? Or perhaps the number of pages read at a time could be decided by looking at the size of the executable?

If this is just a very stupid idea, please let me know, so that I can forget about it as soon as possible =)
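To make the idea concrete, here is a rough userspace sketch of what reading an aligned cluster of pages per fault might look like. This is not real kernel code; all the names (fault_readaround, read_page_into_cache, readaround_pages) are made up for illustration.

/*
 * Sketch only: on a fault in a file-backed mapping, pull in a whole
 * aligned cluster of readaround_pages pages instead of just the one
 * faulting page.  Nothing here is actual kernel code.
 */
#include <stdio.h>

#define PAGE_SIZE 4096UL

/* Cluster size; must be a power of two for the alignment mask below.
 * This stands in for the compile-time or /proc tunable suggested above. */
static unsigned long readaround_pages = 4;

struct file_map {
    unsigned long file_size;    /* length of the mapped file in bytes */
};

/* Stand-in for the real work of reading one page into the page cache. */
static int read_page_into_cache(struct file_map *map, unsigned long off)
{
    printf("reading page at file offset %lu\n", off);
    return 0;
}

/* Handle a fault at byte offset fault_off by reading the whole
 * cluster of readaround_pages pages containing it, stopping at EOF. */
int fault_readaround(struct file_map *map, unsigned long fault_off)
{
    unsigned long first_page =
        (fault_off / PAGE_SIZE) & ~(readaround_pages - 1);
    unsigned long off;

    for (off = first_page * PAGE_SIZE;
         off < (first_page + readaround_pages) * PAGE_SIZE
             && off < map->file_size;
         off += PAGE_SIZE) {
        int err = read_page_into_cache(map, off);
        if (err)
            return err;
    }
    return 0;
}

int main(void)
{
    struct file_map m = { .file_size = 10 * PAGE_SIZE };

    /* A fault anywhere in page 5 pulls in the aligned cluster 4-7. */
    return fault_readaround(&m, 5 * PAGE_SIZE + 123);
}

The one fault would then cost a single 16 KB read instead of four separate 4 KB reads, which is the whole point; the downside is exactly the memory waste mentioned above when the extra pages are never touched.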
-- Hans Löfving