    Subject: Re: Overcommitable memory??
    James Sutherland writes:
    > On 15 Mar 2000, Rask Ingemann Lambertsen wrote:

    >> Not at all. COW is a performance optimisation which does not depend on
    >> overcommitment of memory in any way. Why would you want to turn it off?

    > Because it *IS* overcommitment of memory. You can have two processes, each
    > with their 200Mb of data, in a machine with 256Mb RAM+swap, quite happily
    > - until they start writing to it, at which point you discover you have
    > overcommitted your memory, and things go wrong.

    You're conflating two things: the COW optimization and whether or not
    virtual memory is actually reserved. For example, in a system that
    doesn't overcommit, suppose you have a process that forks: at that
    point, the kernel reserves enough pages of virtual memory to be able
    to give the new process unique pages if it needs them. COW means that
    those reserved pages are only pressed into service when they are
    actually written to.
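
    To make the distinction concrete, here is a rough userland sketch of
    my own (assuming a Linux-style /proc/self/statm, not anything from
    this thread): the parent touches a large buffer and forks, and the
    child's resident set only grows once it actually writes to the
    inherited pages.

        /* Sketch: COW means the child's pages are shared until written. */
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>
        #include <sys/wait.h>

        static long resident_kb(void)
        {
            long size = 0, resident = 0;
            FILE *f = fopen("/proc/self/statm", "r");
            if (!f)
                return -1;
            fscanf(f, "%ld %ld", &size, &resident);
            fclose(f);
            return resident * (sysconf(_SC_PAGESIZE) / 1024);
        }

        int main(void)
        {
            size_t len = 64 * 1024 * 1024;
            char *buf = malloc(len);

            memset(buf, 1, len);          /* parent touches every page */

            if (fork() == 0) {            /* child */
                printf("before write: %ld kB resident\n", resident_kb());
                memset(buf, 2, len);      /* kernel must now copy the pages */
                printf("after write:  %ld kB resident\n", resident_kb());
                _exit(0);
            }
            wait(NULL);
            return 0;
        }

    In a non-overcommitting kernel the 64 MB would be reserved for the
    child at fork time, but the copies are still only made on the writes.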

    How many pages is enough? In the case of a fork, you only need to
    reserve pages for the writable pages of the old process. The
    read-only pages (the program text segment) can be shared (and have the
    binary as backing store to boot). On an exec, the kernel will of
    course reset the count of reserved pages to match the new executable.
    (And the exec could fail if it tries to start a new program that
    requires a larger data segment than available memory allows.)
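
    In other words (a sketch of the accounting rule, with made-up names,
    not actual kernel code):

        struct vm_account {
            long total_pages;      /* RAM + swap available to reserve */
            long reserved_pages;   /* pages already promised */
        };

        /* Refuse rather than overcommit. */
        static int reserve(struct vm_account *a, long pages)
        {
            if (a->reserved_pages + pages > a->total_pages)
                return -1;                      /* would overcommit */
            a->reserved_pages += pages;
            return 0;
        }

        static int account_fork(struct vm_account *a, long parent_writable)
        {
            /* Text is shared and backed by the binary: not counted. */
            return reserve(a, parent_writable);
        }

        static int account_exec(struct vm_account *a,
                                long old_writable, long new_data_pages)
        {
            a->reserved_pages -= old_writable;  /* old image goes away */
            return reserve(a, new_data_pages);  /* failure fails the exec */
        }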

    The thing about fork/exec is that the requirement for extra virtual
    memory when a large process forks a small program (emacs forks ls) is
    only transient: the extra reservation is needed just until the child
    execs, and is dropped again at that point.

    Read-only data is not a problem, so apart from fork/exec, how many
    cases are there where you have processes sharing large numbers of
    writable pages? Note that for overcommitment to actually "work" in
    those cases, those pages should hardly ever be written to: if they are
    all touched in the long run, then you do really need the extra memory,
    and reserving it now will prevent nasty surprises later. And if the
    pages are de-facto read-only, would it not be better if the
    application marked them as such before forking?
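
    Something along these lines (my own sketch; the large table and its
    size are made up) could, in principle, let a kernel that reserves only
    for writable pages skip the reservation for such data:

        #define _DEFAULT_SOURCE       /* for MAP_ANONYMOUS */
        #include <sys/mman.h>
        #include <unistd.h>

        #define TABLE_PAGES 4096      /* hypothetical big lookup table */

        int main(void)
        {
            size_t len = TABLE_PAGES * (size_t)sysconf(_SC_PAGESIZE);
            char *table = mmap(NULL, len, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (table == MAP_FAILED)
                return 1;

            /* ... fill the table once ... */

            mprotect(table, len, PROT_READ);  /* now provably read-only */

            if (fork() == 0) {
                /* child can read the table, never needs private copies */
                _exit(0);
            }
            return 0;
        }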

    I have some experience with the pros and cons of overcommitment on
    IRIX workstations, where you can specify how many pages the kernel is
    allowed to overcommit. When the system is stressed and overcommitment
    isn't allowed, the first sign is typically that you cannot print from
    netscape or something similarly irritating. When overcommitment is
    allowed, the first sign is processes dying at random, with the X
    server usually among the first to go. I don't overcommit at all.

    If during normal work you get processes killed due to overcommitment,
    or find them unable to fork, exec, or malloc due to memory shortage,
    you need to
    either get more (virtual) memory or lessen the workload.

    One thing that irks me about the current discussion is the complete
    lack of data: it would be interesting to know how much additional VM a
    sane non-overcommitting regime requires compared with the
    overcommitting case. It seems no-one actually knows.
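
    For what it's worth, a crude way to get such a number on Linux (my own
    sketch, assuming per-process /proc/<pid>/maps files) is to add up all
    private writable mappings -- roughly what a strict kernel would have
    to reserve -- and compare the total with RAM + swap:

        #include <ctype.h>
        #include <dirent.h>
        #include <stdio.h>

        int main(void)
        {
            unsigned long long committed = 0;
            DIR *proc = opendir("/proc");
            struct dirent *d;

            while (proc && (d = readdir(proc)) != NULL) {
                char path[64], line[256], perms[8];
                unsigned long start, end;
                FILE *maps;

                if (!isdigit((unsigned char)d->d_name[0]))
                    continue;                 /* not a process directory */
                snprintf(path, sizeof path, "/proc/%s/maps", d->d_name);
                maps = fopen(path, "r");
                if (!maps)
                    continue;
                while (fgets(line, sizeof line, maps)) {
                    if (sscanf(line, "%lx-%lx %7s",
                               &start, &end, perms) != 3)
                        continue;
                    /* writable + private: what strict accounting charges */
                    if (perms[1] == 'w' && perms[3] == 'p')
                        committed += end - start;
                }
                fclose(maps);
            }
            if (proc)
                closedir(proc);
            printf("private writable mappings: %llu MB\n", committed >> 20);
            return 0;
        }

    Pages shared copy-on-write are counted once per process, which is
    exactly what a reservation-based kernel would have to account for, so
    it should at least put a number on the difference.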

    Olaf Weber

    Do not meddle in the affairs of sysadmins,
    for they are quick to anger and have no need for subtlety.

