Subject: Re: memory crash

On Tue, 10 Dec 1996, William Burrow wrote:
> > Just for the hell of it, I decided to see what would happen if I
> > overloaded my system with cpu intensive programs which would cause the
> > system to run out of memory and swap. The system started thrashing, and
> > ultimately required a hard boot. I don't think this is healthy. Any user
> > could do this.
>
> Is this news to you guys? This ran by the newsgroups a few weeks ago. Do
> you guys read the newsgroups? Do you want, say, two line summaries of the
> latest brouhahas on the newsgroups and length of such threads?

Which of the 11 newsgroups had this thread? Was it 'anyone can crash
your system' or something like that? Sorry, I did not read it. It sounded
too much like 'Anyone can make an enormous amount of money.' I look to the
kernel mailing list and the RedHat mailing list for more serious stuff.
I don't have time to read everything in the newsgroups.

>
> > I have some questions:
> > 1. Is it possible to set limits on accounts so that
> > a. this kind of thing cannot be done ?
>
> Under certain circumstances, the shell gets swapped out to make space for
> the offending program, and cannot be swapped back to enforce ulimits. So
> ulimits won't work. (This is the case on my machine, when I thought I
> would be so smart to set up the ulimits fairly conservatively).
>
> > 2. What should be done to insulate the system against this kind of thing?
>
> Better swap scheme. Linux performs horribly under heavy swapping.
> Memory is getting cheaper though, so maybe the impetus is slowly moving
> away. (e.g. imagine 200 meg Gaussian problems running on a system with 64
> meg... apparently a lowly microVax can handle this but not Linux.)
>
> [horrible complex code elided]
>
> A simple malloc() bomb will do.
>
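
(For anyone who hasn't run into one, the kind of malloc() bomb mentioned
above needs nothing fancier than the rough sketch below: just allocate
in a loop and touch every page so the memory really gets used. The 1 MB
chunk size is arbitrary.)

    #include <stdlib.h>
    #include <string.h>

    /* illustrative only: grab memory forever and dirty it */
    int main(void)
    {
        for (;;) {
            char *p = malloc(1024 * 1024);   /* a megabyte at a time */
            if (p == NULL)
                continue;                    /* keep hammering anyway */
            memset(p, 1, 1024 * 1024);       /* touch it so it counts */
        }
    }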

My point was not to show a simple way to crash the system. It was to
ask about ways to control errant programs. The defaults should be
set up to do this automatically. I am surprised they aren't.

I forgot about ulimit, so I went back to find out about it.

The Linux man page on ulimit is non-existent. I had to go to my
Solaris machine in the office to find a man page.

OK, so there is a 'ulimit -v' to set virtual memory limits. Is that it
in Linux? Nope!

[sen1@elsie cprogs]$ ulimit -v 500
ulimit: cannot raise limit: Invalid argument

Next, try 'ulimit -m'. OK, can set memory limits.
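
If I understand it right, the shell's ulimit built-in is just a front
end for the setrlimit(2) call, so the same sort of cap can be set from
inside a program (or a small wrapper) before anything starts allocating.
A rough sketch, assuming the standard interface and an arbitrary 8 MB
data-segment cap:

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        /* cap the data segment at 8 MB; malloc() beyond that should
           fail with ENOMEM instead of eating the whole machine */
        rl.rlim_cur = 8 * 1024 * 1024;
        rl.rlim_max = 8 * 1024 * 1024;
        if (setrlimit(RLIMIT_DATA, &rl) != 0) {
            perror("setrlimit");
            return 1;
        }

        /* ... exec or run the real work here ... */
        return 0;
    }

Whether a data-segment limit alone catches every kind of runaway I
don't know, but at least it is enforced by the kernel per process, not
by the shell, so it doesn't depend on the shell staying resident.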

Is Linux an Operating System or a puzzle?

Don't get me wrong. I think Linux is great. Shows even greater promise.
And I think you developers are doing great things.

But, c'mon, the default setup on the box should not make the sysadmin
check more or less everything to make sure there is no danger. There
ought to be a simple mechanism in the kernel to kill runaway processes.
The fact that a user doesn't know enough about how to run things is no
reason why the whole system should get killed.

Bet this doesn't happen with Solaris. I'll try tomorrow.
Wonder if it happens in FreeBSD.

I have been arguing positively about the modern 'stability and
advantages' of Linux with my friends for months now.

'Not just for hackers'.
'More secure and functional than NT.'
'Linux on Intel is a great cost-effective alternative to Solaris on Suns
in a production environment'
etc. etc.

No more ranting now. I am just amazed. One answer to my post says that
this isn't a kernel issue, and now another says:

'We all know this, where have you been?'

As if this were just the most natural thing in the world,
and why would one expect anything else?

It isn't supposed to be this way, period!

-sen

