Subject: Re: Overcommitable memory??
Date:    Sat, 18 Mar 2000 13:32:01 -0400
From:    Horst von Brand <>
Olaf Weber <olaf@infovore.xs4all.nl> said:
[...]
> Read-only data is not a problem, so apart from fork/exec, how many
> cases are there where you have processes sharing large numbers of
> writable pages? Note that for overcommitment to actually "work" in
> those cases, those pages should hardly ever be written to: if they are
> all touched in the long run, then you do really need the extra memory,
> and reserving it now will prevent nasty surprises later. And if the
> pages are de-facto read-only, would it not be better if the
> application marked them as such before forking?
Many architectures don't know of read-only data, so you must consider it
writable anyway. Shared libraries often live in writable segments, since
addresses need to be fixed up. And writable data _is_ shared: on every
single fork(2). It might be short-lived, but without overcommit it HAS to
be available _now_. With overcommit you can hope that not every other user
cashes in immediately on reserved memory, or that some other process exits
and frees memory before you need it.

Marking pages "de-facto read-only" in the process is error-prone, and will
be done wrong often enough that the feature would have to be taken back.
OOM is rare; programs crashing because "I thought it was read-only but
overlooked..." would be frequent.
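To make the fork(2) point concrete, here is a minimal sketch (mine, not
from the mail; the strict-accounting knob vm.overcommit_memory=2 is a
later-kernel setting) showing why fork of a process with a big private
writable mapping depends on overcommit:

    /*
     * Illustrative sketch: a process with a large private writable
     * mapping forks.  Under heuristic overcommit the fork is cheap,
     * since parent and child share the pages copy-on-write.  Under
     * strict accounting (vm.overcommit_memory=2 on later kernels),
     * the kernel must reserve a full second copy of every private
     * writable page at fork time, so fork() can fail with ENOMEM
     * even though the child never writes a single page.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        size_t len = 512UL * 1024 * 1024;   /* 512 MB, private + writable */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(p, 1, len);                  /* actually touch the pages */

        pid_t pid = fork();                 /* the second copy must be
                                               reservable _now_ under
                                               strict accounting */
        if (pid < 0) {
            perror("fork");                 /* ENOMEM without overcommit */
            return 1;
        }
        if (pid == 0)
            _exit(0);                       /* child writes nothing: the
                                               reservation was never used */
        waitpid(pid, NULL, 0);
        return 0;
    }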
Next time, go ask your bank if they have the cash handy for the case that
everybody decides to close their accounts on the same day.
-- 
Horst von Brand                             vonbrand@sleipnir.valparaiso.cl
Casilla 9G, Viña del Mar, Chile                               +56 32 672616