Date:    Wed, 11 Dec 1996 08:57:59 -0500 (EST)
From:    "Richard B. Johnson" <>
Subject: Re: Memory intensive processes
On Tue, 10 Dec 1996, William Burrow wrote:
> > On Tue, 10 Dec 1996, Richard B. Johnson wrote:
> >
> > Amongst other things, the VAX has a "modified page writer". It works like
> > this. When a process is allocated memory, the initial memory comes from
> > a pool of shared zero-filled pages. These pages don't actually get
> > owned by a specific process until a process actually writes to one.

[SNIP]

> VAX/VMS (and quite recently too). Has this been implemented in Linux?
> Is somebody planning to implement it? Was it you who wrote me about this
> before??? Deja vu on this.

I have written about this before. Too often I get interrupted with a
"work break" so I haven't mucked around with the kernel except to help
fix some occasional problems in drivers.
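For anyone who wants to see the effect from user space, here is a toy
program (just my own sketch, assuming mmap(MAP_ANONYMOUS) and
/proc/self/statm behave the usual way; it is not taken from any kernel
code). An anonymous mapping costs essentially no physical memory until
the pages are actually written, which is the same idea as the VMS
shared zero-filled pages:

/* Sketch: demand-zero pages only become "owned" on first write. */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static long resident_pages(void)
{
    long size = 0, resident = 0;
    FILE *f = fopen("/proc/self/statm", "r");
    if (f) {
        fscanf(f, "%ld %ld", &size, &resident);
        fclose(f);
    }
    return resident;                        /* in pages */
}

int main(void)
{
    size_t len = 64UL * 1024 * 1024;        /* 64 MB */
    long page = sysconf(_SC_PAGESIZE);
    char *p;

    printf("before mmap : %ld resident pages\n", resident_pages());

    p = mmap(NULL, len, PROT_READ | PROT_WRITE,
             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    printf("after mmap  : %ld resident pages\n", resident_pages());

    /* The first write to each page is what forces the kernel to hand
     * out a private, zero-filled page of its own. */
    for (size_t i = 0; i < len; i += page)
        p[i] = 1;

    printf("after write : %ld resident pages\n", resident_pages());
    munmap(p, len);
    return 0;
}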
> Consider you have a process with a very large set of matrices. Most of
> these could be sparse (eg mostly zeros). The IEEE representation of
> floating point zeroes is all zeroes. Therefore, the scheme you mention
> could in fact suitably represent in a single page a large chunk of memory
> that would otherwise be wasted (filled with zeroes). This alone could get
> some of the large process blues off of Linux' back.

This is true. Also the act of READING zero-filled pages should not cause
a trap to the operating system. Many FFTs null-fill a lot of RAM and
certainly never write to it.
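As a rough illustration of the sparse-matrix point (again only a sketch
of mine, not anything the kernel does today), a large matrix of IEEE
doubles can be read end to end without ever needing private pages,
because the all-zero bit pattern already is 0.0; only the rows that get
written have to be backed by real memory:

/* Sketch: a mostly-zero "matrix" where only a few rows are written. */
#include <stdio.h>
#include <sys/mman.h>

#define ROWS 4096
#define COLS 4096

int main(void)
{
    size_t len = (size_t)ROWS * COLS * sizeof(double);
    double (*m)[COLS] = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (m == MAP_FAILED) { perror("mmap"); return 1; }

    /* Read-only pass: every element reads as 0.0 out of shared
     * zero-filled pages, so this pass needs no private memory. */
    double sum = 0.0;
    for (int i = 0; i < ROWS; i++)
        for (int j = 0; j < COLS; j++)
            sum += m[i][j];

    /* Only these 16 rows are ever written, and only they should end
     * up costing the process real pages. */
    for (int i = 0; i < 16; i++)
        for (int j = 0; j < COLS; j++)
            m[i][j] = 1.0;

    printf("sum of untouched matrix = %g\n", sum);
    munmap(m, len);
    return 0;
}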
> > VAX/VMS has quotas on just about everything. The maximum working-set
> > size, i.e., the maximum virtual pages that a process can own, is
> > set via AUTHORIZE. Further, SYSGEN parameters also set sizes system-
> > wide.
>
> I once heard a joke that VMS would log when a user sneezed. Most
> Unixheads don't seem to like VMS all that much, though it had some
> very good ideas.
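Linux has nothing as fine-grained as AUTHORIZE and SYSGEN, but
setrlimit() gives a crude per-process analogue. Purely as a sketch
(assuming RLIMIT_AS is available on your kernel/libc; the 32 MB figure
and the fork are only for illustration):

/* Sketch: cap a child's address space as a stand-in for a quota. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();
    if (pid == 0) {
        /* Child: limit its total address space to 32 MB. */
        struct rlimit rl = { 32UL << 20, 32UL << 20 };
        if (setrlimit(RLIMIT_AS, &rl) != 0)
            perror("setrlimit");

        void *p = malloc(64UL << 20);       /* try to exceed the cap */
        printf("64 MB malloc under a 32 MB cap: %s\n",
               p ? "succeeded" : "refused");
        free(p);
        _exit(0);
    }
    wait(NULL);
    return 0;
}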
VAXen were designed in the days of slow, poor-performing hardware. We can
learn a lot from the successes of VAX/VMS, but we should not copy its
failures. Memory management is one area in which it excelled. Can you
imagine: 35 users compiling FORTRAN on a system with 4 megabytes of RAM?
Yes, it worked. Of course, there was a lot of help from some of the
hardware. The tty boards had separate CPUs that handled all the
escape sequences, etc. The CPU was never called upon to write pages of
screen memory a la Xwin.
> {excellent ideas elided for brevity}
>
> > Now, what this does is help prevent a runaway task from taking all the
> > system resources. If your task is a memory hog, it gets slowed down
> > by this allocation strategy while other tasks end up using CPU time
> > stolen from the memory hog.
>
> So, that would be half an answer anyway. I don't see how the kernel can
> do much about how a process decides to access memory. Gnuchess is
> particularly bad on memory restricted systems (that guy with the 8meg RAM
> 386 ought to give it a shot to see what this problem is about). The days
> of assuming infinite, high speed memory are slowly moving away also, as
> CPUs ramp to faster clocks and depend more on cache memory.

I just got a patch from "b" which causes the swapper to sleep, i.e., give
the CPU to someone else after some hard paging. I will try it.
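I have not looked at that patch yet, so purely as an illustration of the
general idea (a user-space stand-in of my own, not what the swapper
change actually does): a process can watch its own hard-fault count with
getrusage() and voluntarily give up the CPU whenever it has been paging
heavily, so other tasks get the time it would otherwise burn on thrashing:

/* Sketch: back off voluntarily after a burst of hard page faults. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>
#include <time.h>

static long major_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_majflt;            /* faults that required real I/O */
}

int main(void)
{
    size_t len = 256UL << 20;       /* a deliberately oversized buffer */
    char *buf = malloc(len);
    if (!buf) return 1;

    long last = major_faults();

    for (size_t off = 0; off < len; off += 4096) {
        buf[off] = 1;               /* the "memory hog" at work */

        long now = major_faults();
        if (now - last > 64) {      /* arbitrary threshold */
            /* Paging hard: hand the CPU away for 10 ms, then go on. */
            struct timespec ts = { 0, 10 * 1000 * 1000 };
            nanosleep(&ts, NULL);
            last = now;
        }
    }
    free(buf);
    return 0;
}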