Subject: Re: mmap() is slower than read() on SCSI/IDE on 2.0 and 2.1
Date: 15 Dec 1998

>Because madvise() is a kludge, and is why we don't have such a beast
>in Linux.
>

I said: try madvise().
You said: it's a kludge.
I said: why is madvise() a kludge.
You said: Because madvise() is a kludge.

Let me go out on a limb again: why is madvise() a kludge?
How else could you tell the VM about single-use and randomly accessed pages?
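
For reference, this is roughly the interface I have in mind: the
BSD-style madvise(). Linux does not implement it today, so take this as
a sketch of the idea rather than working Linux code (error checking
omitted):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        struct stat st;
        int fd = open(argv[1], O_RDONLY);
        char *buf;

        fstat(fd, &st);
        buf = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);

        /* One call tells the VM the whole mapping will be read once,
         * front to back, so it can read ahead aggressively and drop
         * pages behind us; MADV_RANDOM would say the opposite. */
        madvise(buf, st.st_size, MADV_SEQUENTIAL);

        /* ... scan buf[0 .. st.st_size) ... */

        munmap(buf, st.st_size);
        close(fd);
        return 0;
}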

>You know exactly at a read() call:
>
>1) Where in the file.
>2) How much the user wants in this request.
>
>For page faults you know exactly where but your "how much" is constant
>per request, that is PAGE_SIZE. This is the core problem.
>

I don't think that you learn that much by using read(), since most programs
use a static-size buffer. In that sense it is very similar to mmap():
mmap() = read() with PAGE_SIZE length.
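
Something like the usual loop below is what I mean; the request length
the kernel sees never changes from call to call (just a sketch, nothing
Linux-specific assumed):

#include <stdio.h>      /* for BUFSIZ */
#include <unistd.h>

long consume(int fd)
{
        char buf[BUFSIZ];       /* fixed-size buffer, picked once */
        long total = 0;
        ssize_t n;

        /* Every call asks for exactly sizeof(buf) bytes, much as every
         * mmap() fault asks for exactly PAGE_SIZE. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
                total += n;     /* process buf[0 .. n) here */

        return total;
}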

>This is what mmap() faults currently do, one page of readahead. On
>the read() side it is much more aggressive. More aggressive read-ahead
>leads to fatter and more efficient I/O requests.
>

I can see this for read() in grep now. After looking at the grep source,
I see how it reads in blocks whose size is a multiple of PAGE_SIZE. In
this case read() can prefetch multiple pages, whereas mmap() would only
be able to prefetch a single page.
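
For comparison, the mmap() version of the same kind of scan looks
roughly like this; each first touch of a page traps into the fault
handler, which (as you said) reads ahead only one page, no matter how
much of the file the loop is going to walk (again only a sketch, error
checking omitted):

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

long count_newlines(const char *path)
{
        struct stat st;
        int fd = open(path, O_RDONLY);
        long lines = 0;
        const char *p;
        off_t i;

        fstat(fd, &st);
        p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

        /* Each time i crosses into a page not yet mapped, we take a
         * fault and the kernel reads that page (plus one of readahead),
         * regardless of how far this loop will go. */
        for (i = 0; i < st.st_size; i++)
                if (p[i] == '\n')
                        lines++;

        munmap((void *) p, st.st_size);
        close(fd);
        return lines;
}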

>Now one possible solution is to keep some kind of page fault history
>around per VM area. That is what some systems do.
>

This sounds good. In practice, how well does such a change perform?

>BTW, there is a neat way you could increase the filemap_nopage()
>prefetching if the copy from the user's mmap()'d area happens from the
>kernel (ie. in a non-sendfile() socket write for example). Here you
>know the amount of data you will be copying, so you could add a
>per-vma "faultahead" hint value, then filemap_nopage() uses this
>exactly the way generic_readpage() uses the 'len' parameter in its prefetching
>heuristics.
>

The faultahead value sounds interesting. But at this point, doesn't this
get close to madvise(), except that you don't have the ability to hint about
other types of access? I think it would be a good addition to madvise().
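
For example, with a BSD-style madvise() the copy you describe could hint
the exact range up front, something like the sketch below (illustrative
only, since Linux lacks madvise(); the alignment fixup is there because
madvise() wants a page-aligned start address):

#include <sys/types.h>
#include <sys/mman.h>
#include <unistd.h>

/* Before copying 'len' bytes starting at map + off out of a mapping
 * (say, into a socket), tell the VM the whole range will be needed so
 * it can fault it in ahead of time instead of one page per fault. */
ssize_t send_chunk(int sock, char *map, off_t off, size_t len)
{
        unsigned long page = sysconf(_SC_PAGESIZE);
        char *start = (char *) ((unsigned long) (map + off) & ~(page - 1));

        madvise(start, (map + off + len) - start, MADV_WILLNEED);

        return write(sock, map + off, len);
}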


>Programs in your system, as they run, fault in different pages of the
>C library right? After some time, more and more of libc resides in
>the page cache and no I/O is needed. So if the optimization is "at
>mmap() time, setup the page table entries for pages which we have in
>ram already right then" what is your upper bound on this? The problem
>is what if this is just some short lived program which only needs one
>or two pages of libc to do its work and then exit()? We don't want
>to spend all of our time setting up all of his page tables when he
>will use only a few.
>

Hmmm... I guess that I will have to look at the Linux equivalents of
BSD's vm_map and vm_map_entry. I don't know enough to comment about
this. I probably need to see how Linux sets up the VM map on
process creation.

>However this is an important optimization, because when it helps, it
>helps a lot. I had code which did this once, but because the
>heuristic was difficult to come by, I threw that work away. It was
>amusing, when the system first came up benchmarks ran incredibly fast,
>but after some time and system usage they degraded horribly.
>

Degrade to below the performance without the optimization? Why would it
degrade? It would probably be interesting to try to improve on your
work and see if the gains can be made to stick. Do you still have
the code?

>Later,
>David S. Miller
>davem@dm.cobaltmicro.com
>

Thanks for the help.

-jay

P.S., I still think that madvise() would be useful.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
