Date: Sun, 19 Mar 2000 20:23:20 +0000
From: James Sutherland <>
Subject: Re: Overcommitable memory??
On Sat, 18 Mar 2000 08:31:58 -0600, you wrote:
>On Fri, 17 Mar 2000, Rik van Riel wrote:
>>On Fri, 17 Mar 2000, Stephen C. Tweedie wrote:
>>> On Tue, 14 Mar 2000 14:32:49 -0300 (BRST), Rik van Riel
>>> <riel@conectiva.com.br> said:
>>>
>>> >> Without overcommit that just can not happen. There will be
>>> >> either a free page of memory or a free page of swap into which
>>> >> you can swap something else out.
>>>
>>> > Without overcommit it can still happen, unless you reserve one
>>> > page of swap space for every page of data that gets mmap()ed...
>>>
>>> Even that doesn't cure things. Think about stack growth.
>>
>>Exactly.
>>
>>I'd really like the non-overcommit fans to come up
>>with a good reason why a non-overcommitting system
>>doesn't suffer from the same problem, but all I've
>>seen so far are changes to the problem :)
>>
>>(and no, I don't believe there is any solution to
>>OOM on any system that allows userspace to dynamically
>>allocate memory ....)
>
>As long as you remain within your resource quota limits...
>Using resource accounting and quota controls will prevent the SYSTEM
>from OOM conditions.
It was never a system problem to begin with. The question is, which process does the system kill? It has to kill one process - killing the one which requested the memory is a case of "shooting the messenger".
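To make "don't shoot the messenger" concrete, here is a purely illustrative sketch of badness-scored victim selection - this is NOT Rik's actual code, and the particular weightings (RSS, runtime, root) are assumptions of mine:

#include <stddef.h>

struct task {
    int pid;
    unsigned long rss;      /* resident pages */
    unsigned long runtime;  /* CPU seconds consumed */
    int is_root;            /* crude stand-in for "system process" */
};

/* Higher score == better candidate to kill. */
static unsigned long badness(const struct task *t)
{
    unsigned long score = t->rss;       /* big users go first... */
    score /= (t->runtime / 60 + 1);     /* ...long-lived jobs are spared */
    if (t->is_root)
        score /= 4;                     /* protect system daemons */
    if (t->pid == 1)
        score = 0;                      /* never pick init */
    return score;
}

static const struct task *pick_victim(const struct task *tasks, int n)
{
    const struct task *victim = NULL;
    unsigned long best = 0;
    int i;

    for (i = 0; i < n; i++) {
        unsigned long s = badness(&tasks[i]);
        if (s > best) {
            best = s;
            victim = &tasks[i];
        }
    }
    return victim;  /* NULL if nothing is killable */
}

The point is that the victim is chosen by scoring every process, not by aborting whichever one happened to touch the last free page.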
>Users (and processes) will get Out-Of-Resource
>errors which the users can deal with.
In practice, they usually just die.
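Mostly because almost no userspace code has a real error path for allocation failure - and under overcommit the failure tends to arrive as a signal rather than a NULL return anyway, so even a careful (hypothetical) helper like this rarely gets the chance to "deal with" anything:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char *careful_strdup(const char *s)
{
    char *p = malloc(strlen(s) + 1);
    if (p == NULL) {
        /* The "user can deal with it" path - almost never written. */
        fprintf(stderr, "out of memory\n");
        return NULL;
    }
    return strcpy(p, s);
}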
>The system administrators and
>management can analyze the resource accounting and determine where to
>assign/reassign allocations of a finite resource.
Most systems don't have/suffer from that degree of micromanagement.
>In all cases, the
>system should not crash.
I never suggested it should - if it ever crashes, this is a bug. This is NOT what we are discussing here. The system doesn't crash - it just kills one or more processes. The issue is, WHICH process?
>System processes should not abort unless they
>are exceeding the resources allocated to them. User processes should
>not be able to cause random process aborts.
Which is the aim of Rik's patch, largely. Until we actually have per-user resource quotas, though, there is not a lot you can do about it.
>This also allows self recovery from some forms of DoS attacks. If
>telnetd receives 1000 connections then it could exceed resource
>limits. inetd may die (or sendmail or...). With resource limits
>in place, at least inetd would be able to handle that - if say
>no more than 50 concurrent interactive logins were allowed, the
>other connections would be denied.
No - inetd just gets an OOM error, and dies. Same as before.
>The same goes for Apache which
>DOES do this.
No. If a child dies, it spawns a replacement (just like inetd, init etc. do). If the parent dies (as it would, for OOM situations) the WWW site is down.
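That model looks roughly like this - an illustrative sketch, not Apache's actual source; the worker count and the stub serve() are made up:

#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static void serve(void)
{
    /* Stand-in for the real per-connection work. */
    sleep(1);
    _exit(0);
}

int main(void)
{
    int nchildren = 0;

    for (;;) {
        while (nchildren < 5) {        /* keep five workers alive */
            pid_t pid = fork();
            if (pid < 0)
                sleep(1);              /* fork failed (OOM?) - back off */
            else if (pid == 0)
                serve();               /* child never returns */
            else
                nchildren++;
        }
        if (wait(NULL) > 0)            /* reap a dead child... */
            nchildren--;               /* ...and loop to replace it */
    }
}

Kill a child and the parent replaces it; kill the parent and nothing replaces anything - which is exactly why inetd getting the OOM error is fatal.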
>If the proper resource allocation were given then
>users running wild memory allocators would not be able to crash
>the system by causing init to die.
They shouldn't be able to anyway - Rik's patch should ensure that init will always survive, even if every other process on the box is hosed. (Of course, if init was the malfunctioning process to begin with...)
>All else is a matter of implementation.
The core problem remains. You, the user, have a finite amount of memory available to you, however that limit is defined. Once you run out, your processes will start dying.
We need per-user resource limits, ideally - until then, everything is done at process granularity instead. This is a shortcoming we all know about already, but not one that is likely to be fixed any time soon.
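For reference, the process-granularity control we do have today is setrlimit() before exec - here sketched with an invented 16MB cap and an invented program name:

#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/resource.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();

    if (pid == 0) {
        /* Child: cap the data segment before running the untrusted
         * program. The 16MB figure is arbitrary. */
        struct rlimit rl = { 16 * 1024 * 1024, 16 * 1024 * 1024 };
        if (setrlimit(RLIMIT_DATA, &rl) < 0) {
            perror("setrlimit");
            _exit(1);
        }
        execl("/usr/local/bin/memory-hog", "memory-hog", (char *)NULL);
        _exit(1);      /* only reached if exec failed */
    }

    /* Parent: the limit binds that one process (and its children, by
     * inheritance), but nothing stops a user from starting a hundred
     * such processes - that is the missing per-user accounting. */
    if (pid > 0)
        waitpid(pid, NULL, 0);
    return 0;
}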
James.