 
Subject: Overcommitable Memory...
Date: 18 Mar 2000

I don't understand the whole situation.

Why should you EVER be out of memory if you aren't out of swap, in an OS
that supposedly supports demand paging?

Why, in a pinch, aren't the "most idle" pages paged out to make room for
the latest request, and brought back in by a page fault when they aren't
presently mapped and are needed?
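
To make concrete what I mean (a minimal sketch, assuming an ordinary
Linux/glibc setup): an anonymous mmap() only reserves address space,
physical pages arrive one page fault at a time as the memory is first
written, and under pressure the kernel is free to push the least recently
used of those pages out to swap and fault them back in on the next access.

/*
 * Sketch of demand paging from user space.  The mmap() call reserves
 * 64MB of address space but essentially no physical memory; each write
 * in the loop below faults in one page.  Under memory pressure the
 * kernel can evict idle pages of this mapping to swap and fault them
 * back in when they are next touched.
 */
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 64UL * 1024 * 1024;    /* 64MB of address space */
    char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }

    for (size_t off = 0; off < len; off += 4096)
        p[off] = 1;                     /* first touch => page fault */

    munmap(p, len);
    return 0;
}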

From my perspective, slow operation is better than processes being
randomly killed. I've had a web server running on a machine with 96MB of RAM,
and after several days Linux starts randomly killing things, and inevitably
it clobbers processes like nfsd, inetd, and the portmapper, things essential
to the proper operation of the system.

And then, IF you really truly do run out of memory, swap included, at that
point refuse additional requests for memory, including fork/exec'ing new
processes, rather than killing off existing processes.
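
For what it's worth, here is roughly what that looks like from user space.
This is only a sketch, and it assumes a kernel and allocator that actually
report failure (malloc() returning NULL, fork() failing with ENOMEM) instead
of overcommitting and then killing some process later:

/*
 * If out-of-memory is reported at the point of the request, the caller
 * can shed work or retry later instead of being killed.
 */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    /* Try to take on new work; back off if the memory isn't there. */
    void *buf = malloc(256UL * 1024 * 1024);
    if (buf == NULL) {
        fprintf(stderr, "allocation refused: %s\n", strerror(errno));
        return 1;                   /* degrade gracefully, don't crash */
    }

    /* Same idea for new processes: a refused fork() is recoverable. */
    pid_t pid = fork();
    if (pid < 0) {
        if (errno == ENOMEM)
            fprintf(stderr, "fork refused for lack of memory, retry later\n");
        free(buf);
        return 1;
    }
    if (pid == 0)
        _exit(0);                   /* child does nothing in this sketch */

    free(buf);
    return 0;
}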

Maybe it's just me, but the current Linux behavior is almost as bad as the
Xenix that used to run on Tandy 6000s, which, being MC68000-based (a CPU that
cannot restart a faulted instruction), could not support demand paging (at
least not without swapping the 68000 for a 68010 and tweaking the kernel,
which some people actually did).

There are a lot of areas where Linux is superior to other Unixes, but this
is one area where Linux really sucks.


