Subject: Re: Memory overcommitting (was Re: http://www.redhat.com/redhat/)
Date: Thu, 20 Feb 1997 21:54:24 -0500
From: shendrix@escape ...
In message <199702202040.PAA06102@enterprise.wyszynsk-ppp.clark.net>, John Wyszynski writes:
> Thanks to all who have lobbed missiles at me, especially those who believe
> that they know all that can be known. I simply cannot respond to them all.
> If this method of allocating memory is indeed as widespread as some have
> claimed, it hasn't been going on as long as some of you "experts" claim.
Then show us which system in the past did it better, and how we can modify the current kernel to handle the new idea.
About the only thing ``better'' would be to have enough silicon for each program to have everything it could possibly need, all to itself. In the absence of that, you have to come up with a scheme that satisfies the various cases as best it can. Alternatively you could make sure your program will FIT in the system you have to run it on.
Most of what you are talking about is characteristic of resource allocation systems. Such systems are never perfect; they cannot be. They exist because of imperfection: limited resources. Some handle it better than others. I'm not sure Linux is the best, but it is pretty good in most cases.
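A minimal sketch in C of the tradeoff, from user space (the 1 GB request is only an arbitrary illustration; what actually happens depends on the kernel's overcommit policy and on how much RAM and swap the box has): under strict accounting the malloc() itself fails, while under overcommit the malloc() succeeds and the trouble only shows up later, when the pages are actually touched.

/*
 * Hypothetical example, not taken from any particular kernel:
 * request more memory than the machine may be able to back,
 * then touch every page.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = (size_t)1024 * 1024 * 1024;   /* 1 GB, arbitrary */
    char *p = malloc(size);

    if (p == NULL) {
        /* A strict-accounting system refuses the request up front. */
        fprintf(stderr, "malloc failed up front\n");
        return 1;
    }

    /* An overcommitting system says yes and finds out later. */
    printf("malloc succeeded; touching the pages now\n");
    memset(p, 0, size);   /* here an overcommitted system may have to
                             kill this (or some other) process */
    printf("all pages backed\n");
    free(p);
    return 0;
}

On a strict-accounting system the program exits cleanly at the first check; on an overcommitting one the memset() is where something may have to die.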
> It may be the explanation why in the last few years I have seen so many
> programs die for no cause in the middle of the day. (On non-Linux systems
If there was no cause, then why did they die?
I'm betting on two reasons: programming error or a program too big for the chosen system.
> so far.) In an operational environment, such havoc is not appreciated.
Name one where such things don't happen. I'm willing to bet that if you know of one, it will be one where a brute-force approach was used. Works great if you have the resources. Otherwise, you have no choice but a resource allocation system that will fail under some conditions.