From: Jesse Pollard <pollard@cats-chateau.net>
Subject: Re: Some questions about linux kernel.
Date: 18 Mar 2000

On Fri, 17 Mar 2000, James Sutherland wrote:
>On Fri, 17 Mar 2000, Jesse Pollard wrote:
>> Paul Jakma <paul.jakma@compaq.com>
>> > On Thu, 16 Mar 2000, James Sutherland wrote:
>> >
>> > > No. *ANY* memory allocation system can run out of memory. Avoiding
>> > > "overcommitting" would make the OOM situation arise SOONER (and more
>> > > frequently), as well as killing performance.
>> > >
>> >
>> > well, it's more a question of whether you make promises that you might
>> > not be able to keep. If you do (i.e. overcommit) then it's your
>> > (kernel) problem. If you don't, it's not.
>> >
>> > Without overcommit the /system/ can run out of memory, of course - it's
>> > finite - but it's no longer a kernel problem.
>> >
>> > > Right... Now we'll try this on the university's central Unix system, shall
>> > > we? Let's see... 6000 users, 2Gb RAM+swap. They get about 300K each.
>> > > That's ALMOST enough to log in with!
>> >
>> > well then get more ram/swap. But at least it has become a hw issue.
>>
>> It's still not right - the assumption that you have 6000 concurrent users
>> on a 2GB-based system is unreasonable. It is much more likely that there
>> are only 40-50 concurrent users. If the limit is 40 then it becomes 2GB/40,
>> or ~50MB per user, not 300K. After that what you have is a management decision
>> on system expansion, AND the data to back that decision. I have worked in
>> such an environment, and that's the way it is.
>
>I never said 6000 CONCURRENT users. BUT the system is perfectly happy
>without any limits on the number of users. We could have 200 users using
>an AVERAGE of 5Mb each quite happily - if they are all running the same
>couple of programs (a typical workload) overcommit makes the difference
>between needing 2Gb of swap and 20Gb, without *ANY* difference in
>functionality.
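[To put rough, purely illustrative numbers on that: if each of those 200
sessions is promised something like 100 MB of writable address space (heap it
never fully uses, copy-on-write data shared with other instances of the same
programs) but only ever dirties about 5 MB of it, a kernel that refuses to
overcommit must reserve backing store for roughly 200 x 100 MB = 20 GB, while
an overcommitting kernel needs real backing only for the 200 x 5 MB = 1 GB or
so that actually gets written - the order of the 2 GB vs. 20 GB figures above.
The 100 MB per-session figure is only an assumption for the sake of the
arithmetic.]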

Really? Are you now saying that OOM conditions don't crash your system?

>The enormous overhead disabling overcommit would incur, without ANY
>benefits, is simply not practical.

Very practical. I use them every day, just not under Linux.

>> > > The alternative just isn't practical in any real-world situation, except
>> > > more-or-less single user boxes.
>> >
>> > the alternative is very practical: boxes where Oracle runs as one user and
>> > Apache as another, for example. You might want to lock down something
>> > that you know can cause problems, etc.
>> >
>> > > Even then, that single user will still run
>> > > out of memory - it's just his quota, rather than a physical limit.
>> > >
>> >
>> > obviously.
>> >
>> > > So his processes STILL end up dying randomly, but they do it sooner rather
>> > > than later. Hrm. Wow.
>> > >
>> >
>> > but then it's the user's problem - not the OS's.
>> >
>> > > On any realistically specced box, it is almost zero already.
>> > >
>> >
>> > no it's not. A plain Linux box today has a terrible tendency to die if
>> > an app misbehaves wrt memory. It's that bad (unless you think 1GB of
>> > RAM + 4GB+ of swap is a reasonable spec for a desktop).
>> >
>> > per-process limits go a long, long way to solving this (easy with
>> > pam_limits), but not far enough. Per-user limits would be a huge
>> > improvement...
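[For concreteness: the pam_limits setup mentioned above is only a few lines of
configuration. A minimal sketch, with purely example group names and sizes
(memory values are in KB):

    # /etc/security/limits.conf - example per-process caps (sizes in KB)
    @users    hard    as     131072    # 128 MB of address space per process
    @users    hard    rss    65536     # 64 MB resident
    oracle    hard    as     524288    # a service account given more headroom

    # /etc/pam.d/login - apply the limits when the user logs in
    session   required   pam_limits.so

These are per-process ceilings imposed at login; they stop a single runaway
process, which is exactly why per-user limits are called for above as the
next step.]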
>> >
>> > > No, it's a risk with *EVERY* OS.
>> >
>> > no it's not. A non-overcommitting OS doesn't run out of VM for
>> > processes: it cleanly grants or denies memory requests. What the app
>> > does after that is not a kernel problem.
>>
>> I agree that it is not a problem with the OS.
>> It can also prevent additional logins if the resources for the new login
>> are not available to the specific user attempting to log in. This is done
>> in a LOT of UNIX systems, as long as they have functioning per-user
>> resource controls.
>
>Oh dear. The problem here is when the system runs out of VM and has to
>start denying malloc() requests. *NOT* that it has already allocated more
>VM than it has - it has allocated the maximum amount it is able to, and
>now needs to claw some back for legitimate processes.

The only reason to deny malloc() is that the user has exceeded user limits.
The processes that may die are identified. The system does not need to "claw
some back", because if it is a legitimate process then there are resources
available - UNLESS management has decreed that available resources be exceeded.
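
To make the "cleanly grants or denies" point concrete, here is a small sketch
in C (the 64 MB cap is only an example value): the process puts an
address-space ceiling on itself with setrlimit() and then watches malloc()
start returning NULL once the ceiling is reached, instead of the whole machine
being dragged toward an OOM state.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    int main(void)
    {
        /* Example only: cap this process at 64 MB of address space. */
        struct rlimit rl = { 64UL * 1024 * 1024, 64UL * 1024 * 1024 };

        if (setrlimit(RLIMIT_AS, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        size_t total = 0;
        for (;;) {
            char *p = malloc(1024 * 1024);     /* ask for 1 MB at a time   */
            if (p == NULL) {                   /* request denied cleanly   */
                printf("malloc() denied after %lu MB\n",
                       (unsigned long)(total >> 20));
                return 0;
            }
            memset(p, 1, 1024 * 1024);         /* actually touch the pages */
            total += 1024 * 1024;
        }
    }

With a per-process (or per-user) ceiling like this in place, the failing
request is reported to the offending process itself, which is the behaviour
argued for above; the rest of the system never has to "claw back" anything.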
>
>Overcommit is, if anything, a partial SOLUTION here - not part of the
>problem.

No - it is a symptom. The problem is the lack of resource controls.
-------------------------------------------------------------------------
Jesse I Pollard, II
Email: pollard@cats-chateau.net

Any opinions expressed are solely my own.

