Subject: Re: Overcommitable memory??
On Fri, 17 Mar 2000 22:24:38 -0600, you wrote:

>On Fri, 17 Mar 2000, James Sutherland wrote:
>>On Fri, 17 Mar 2000, Andreas Bombe wrote:
>>
>>> On Thu, Mar 16, 2000 at 11:04:23AM +0000, James Sutherland wrote:
>>> > On 15 Mar 2000, Rask Ingemann Lambertsen wrote:
>>> > > Not at all. COW is a performance optimisation which does not depend on
>>> > > overcommitment of memory in any way. Why would you want to turn it off?
>>> >
>>> > Because it *IS* overcommitment of memory. You can have two processes, each
>>> > with their 200Mb of data, in a machine with 256Mb RAM+swap, quite happily
>>> > - until they start writing to it, at which point you discover you have
>>> > overcommitted your memory, and things go wrong.
>>>
>>> He means avoiding overcommit by counting vm requirements but without
>>> actually copying COW pages (denying a COW allocation if it could not
>>> be faulted in at a later time). Resulting in vast areas of unused
>>> RAM.
>>
>>Yes, I know. This does avoid the performance hit - but it still wastes
>>obscene amounts of swap space unnecessarily, and makes the original
>>problem worse by reducing the available amount of memory.
>>
>>On a WWW server with 100 Apache processes of 20Mb, for example, I would
>>need 30Mb or so normally - or 2Gb with this strategy, even though 1.97Gb
>>of this is never used. This means I will run out of memory a LOT sooner -
>>I have 1.97Gb less VM than I otherwise would! This certainly doesn't help
>>the original problem...
>
>Shouldn't need that much -- the text should be shareable and only counted
>once.

Only true if you write-protect the text.

> The data space is going to change anyway.

No, much of the data space is also shared - but writable, so disabling
overcommit means reserving pages "just in case".
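To make that concrete, here's a throwaway sketch (nothing from the kernel,
just a demo I'm making up, and the 200Mb figure is arbitrary): a parent
touches a large writable region and forks. With COW the child shares the
parent's pages and physical usage barely moves; with overcommit disabled,
the kernel has to reserve a full private copy's worth of memory for the
child at fork() time, even if it never writes to it.

#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define REGION (200UL * 1024 * 1024)	/* 200Mb of writable data */

int main(void)
{
	char *data = malloc(REGION);
	pid_t pid;
	if (!data)
		return 1;
	memset(data, 1, REGION);	/* touch it: pages are now resident */

	pid = fork();			/* COW: no pages are copied here */
	if (pid == 0) {
		/* Child: reads are free; only a write would fault in a
		 * private copy of that single page. */
		char c = data[0];
		(void)c;
		sleep(10);		/* long enough to compare memory use */
		_exit(0);
	}
	waitpid(pid, NULL, 0);
	free(data);
	return 0;
}

Run it and watch free memory while the child sleeps: physical use barely
changes even though two processes now map 200Mb of writable data each -
which is exactly the sharing a no-overcommit scheme has to stop counting on.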

> Each slave server will
>handle different requests, giving each different data space. It will have
>to exist anyway. Under any form of resource allocation it should be given
>30-40MB.

No - 100 Apache processes do not take anything like this amount.
Almost the entire data space is shared.
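Just to put illustrative numbers on it (the split between shared and private
is my guess at a typical figure, not a measurement):

	100 children x 20Mb virtual image   =  2000Mb reserved with no overcommit
	shared image, counted once          ~    20Mb
	truly private data, ~0.1Mb x 100    ~    10Mb
	                                       ------
	memory actually needed              ~    30Mb

Which is where the 30Mb-versus-2Gb comparison above comes from.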

>If I remember correctly how Apache works - it will recover when a child
>process exceeds some quota. The child will terminate and the parent
>will be notified. If necessary (under load) the parent will respawn
>a new slave server.
>
>In the current situation, any process may be terminated - even the parent.
>When that process dies, all of the slave servers die.
>
>Which is worse: A properly terminated process that is exceeding a resource
>limit, or a random abort that may crash the system?

You still seem to be arguing in favour of per-user resource limits,
rather than disabling overcommit. I wholeheartedly agree that Linux
should have per-user rlimits - but that's not the issue here.

Besides, the "random abort that may crash the system" is not the
alternative. It is a choice of WHICH process gets the OOM error first
- the "true culprit" (the memory hog), or any old process which
happens to want memory?
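To be clear about what I mean by resource limits, here is a minimal sketch
using setrlimit() with a made-up 64Mb cap (rlimits on Linux today are
per-process rather than per-user, which is precisely why I'd like per-user
ones):

#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;
	void *p;

	rl.rlim_cur = 64UL * 1024 * 1024;	/* illustrative 64Mb cap */
	rl.rlim_max = 64UL * 1024 * 1024;

	/* Cap this process's address space; allocations beyond the quota
	 * now fail cleanly instead of the process being killed later. */
	if (setrlimit(RLIMIT_AS, &rl) != 0) {
		perror("setrlimit");
		return 1;
	}

	p = malloc(128UL * 1024 * 1024);	/* over the quota */
	if (!p)
		perror("malloc over quota");	/* ENOMEM, handled gracefully */
	else
		free(p);
	return 0;
}

The process exceeding its quota is the one that gets the error, which is the
behaviour you want - but that is orthogonal to whether the kernel overcommits.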


James.

