Subject: Re: Overcommitable memory??
On Sun, 19 Mar 2000 15:43:14 -0600, you wrote:
>On Sun, 19 Mar 2000, James Sutherland wrote:
>>On Fri, 17 Mar 2000 22:24:38 -0600, you wrote:
>>>On Fri, 17 Mar 2000, James Sutherland wrote:
>>>>On Fri, 17 Mar 2000, Andreas Bombe wrote:
>>>>
>>>>> On Thu, Mar 16, 2000 at 11:04:23AM +0000, James Sutherland wrote:
>>>>> > On 15 Mar 2000, Rask Ingemann Lambertsen wrote:
>>>>> > > Not at all. COW is a performance optimisation which does not depend on
>>>>> > > overcommitment of memory in any way. Why would you want to turn it off?
>>>>> >
>>>>> > Because it *IS* overcommitment of memory. You can have two processes, each
>>>>> > with their 200Mb of data, in a machine with 256Mb RAM+swap, quite happily
>>>>> > - until they start writing to it, at which point you discover you have
>>>>> > overcommitted your memory, and things go wrong.
>>>>>
>>>>> He means avoiding overcommit by counting vm requirements but without
>>>>> actually copying COW pages (denying a COW allocation if it could not
>>>>> be faulted in at a later time). Resulting in vast areas of unused
>>>>> RAM.
>>>>
>>>>Yes, I know. This does avoid the performance hit - but it still wastes
>>>>obscene amounts of swap space unnecessarily, and makes the original
>>>>problem worse by reducing the available amount of memory.
>>>>
>>>>On a WWW server with 100 Apache processes of 20Mb, for example, I would
>>>>need 30Mb or so normally - or 2Gb with this strategy, even though 1.97Gb
>>>>of this is never used. This means I will run out of memory a LOT sooner -
>>>>I have 1.97Gb less VM than I otherwise would! This certainly doesn't help
>>>>the original problem...
>>>
>>>Shouldn't need that much -- the text should be shareable and only counted
>>>once.
>>
>>Only true if you write-protect the text.
>>
>>> The data space is going to change anyway.
>>
>>No, much of the data space is also shared - but writable, so disabling
>>overcommit means reserving pages "just in case".
>
>I don't want to overcommit. If I do, then it must be at management's
>direction, not because they have no choice.

You mean you don't want demand allocation of memory - you want it to be
pre-allocated? Why should "management" wish to modify the behaviour of
malloc() etc.?
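
To make the distinction concrete, here's a minimal userspace sketch
(illustrative only - the 512Mb figure is arbitrary). With demand
allocation the malloc() normally succeeds at once, because only address
space is reserved; physical pages arrive one at a time as they are
first touched. Strict pre-allocation would instead reserve all of it at
malloc() time:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	size_t len = 512UL * 1024 * 1024;	/* 512Mb of address space */
	char *buf = malloc(len);

	if (buf == NULL) {
		perror("malloc");
		return 1;
	}

	/* At this point RSS is still tiny: nothing has been touched. */
	printf("allocated %zu bytes, press Enter to touch them\n", len);
	getchar();

	/* Writing one byte per page faults real memory in, page by
	 * page. Watch the process grow in top(1) while this runs. */
	for (size_t i = 0; i < len; i += 4096)
		buf[i] = 1;

	printf("touched all pages\n");
	free(buf);
	return 0;
}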

>>> Each slave server will
>>>handle different requests, giving each different data space. It will have
>>>to exist anyway. Under any form of resource allocation it should be given
>>>30-40MB.
>>
>>No - 100 Apache processes do not take anything like this amount.
>>Almost the entire data space is shared.
>
>Sorry - when using modules, each slave server's data space may get
>bigger or smaller. Yes, the base configuration data doesn't change, but
>that is a very small part of what goes into a web server.

The point is, most of the process really is shared. That's how 100
processes fit into much less than 100 times the memory of one. Even if
only 10% of the process space were per-process, I would have
encountered OOM problems without overcommit. Overcommit made the
difference between the WWW server working, and it running out of
memory.
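
A rough sketch of why that is so (again illustrative; the 20Mb figure
echoes the example above): after fork(), parent and child share the
same physical pages copy-on-write, and only the pages a child actually
writes get duplicated, so 100 children cost far less than 100 x 20Mb:

#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define DATA_SIZE (20UL * 1024 * 1024)	/* per-process data */

int main(void)
{
	char *data = malloc(DATA_SIZE);

	if (data == NULL)
		return 1;
	memset(data, 0, DATA_SIZE);	/* parent faults in all 20Mb once */

	for (int i = 0; i < 100; i++) {
		if (fork() == 0) {
			/* Child: shares all 20Mb with the parent via
			 * COW; this write copies just one ~4K page. */
			data[0] = 1;
			sleep(60);
			_exit(0);
		}
	}
	sleep(60);	/* compare per-process RSS vs. total RAM used */
	return 0;
}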

>>>If I remember how Apache works correctly - it will recover when a child
>>>process exceeds some quota. The child will terminate and the parent
>>>will be notified. If necessary (under load) the parent will respawn
>>>a new slave server.
>>>
>>>In the current situation, any process may be terminated - even the parent.
>>>When that process dies, all of the slave servers die.
>>>
>>>Which is worse: A properly terminated process that is exceeding a resource
>>>limit, or a random abort that may crash the system?
>>
>>You still seem to be arguing in favour of per-user resource limits,
>>rather than disabling overcommit. I wholeheartedly agree that Linux
>>should have per-user rlimits - but that's not the issue here.
>
>It is part of the issue. The system would never go OOM; users would go
>OOR (Out-Of-Resources). Out of resources is a manageable condition that
>can be adjusted from the results of performance analysis. OOM is a
>catastrophic failure of the system.

No. Running out of application memory is *NOT* a "catastrophic
failure". It's just an error condition, like a full disk: you need to
move or delete some things to make more space.
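
Handled that way, it looks just like ENOSPC on write(): report it and
back off, rather than crash. A sketch of the idea (the xmalloc name is
my own, not from any particular codebase; note that with overcommit the
failure may instead arrive later as a fault, which is exactly the
objection being argued here):

#include <stdio.h>
#include <stdlib.h>

void *xmalloc(size_t n)
{
	void *p = malloc(n);

	if (p == NULL)
		/* Recoverable: shed load, free a cache, or refuse this
		 * one request - just as you would on a full disk. */
		fprintf(stderr, "out of memory for %zu bytes\n", n);
	return p;
}

int main(void)
{
	char *p = xmalloc(1UL << 20);	/* try a 1Mb request */

	free(p);
	return p ? 0 : 1;
}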

> If the system doesn't provide a
>way to control it, identify which user is at fault, and act as
>directed by management policy, then that system is considered buggy
>and not ready for production use.
>
>>Besides, the "random abort that may crash the system" is not the
>>alternative. It is a choice of WHICH process gets the OOM error first
>>- the "true culprit" (the memory hog), or any old process which
>>happens to want memory?
>
>Right now there is no way to determine which process should get terminated.

There will never be a totally reliable way to determine which process
the administrator would like to terminate, given the choice; it's too
late for that by the time we start blocking requests. When Rik's patch
cuts in, it's a case of "Something has to die. Which one is it?"
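
For illustration only - this is not Rik's actual code, and the fields
and weights are invented - the kind of heuristic involved might rank
candidates roughly like this, so that large, short-lived, unprivileged
processes die before small, long-lived or root-owned ones:

struct candidate {
	unsigned long vm_pages;		/* total VM size in pages */
	unsigned long cpu_secs;		/* accumulated CPU time */
	int is_root;			/* running as uid 0? */
};

static unsigned long badness(const struct candidate *p)
{
	unsigned long score = p->vm_pages;

	/* A long-running process is probably doing real work. */
	score /= (p->cpu_secs / 60) + 1;

	/* Be reluctant to shoot root-owned daemons. */
	if (p->is_root)
		score /= 4;

	return score;	/* the highest score gets killed first */
}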


James.
