Date: Sat, 14 Dec 1996 04:01:19 -0500 (EST)
From: Kevin M Bealer <>
Subject: Don't beat me up either (mm suggestions)
On 26 Nov 1996, Kevin Buhr wrote:
> An amusing anecdote:
>
> One day, right out of the blue, my poor little 8 meg machine went
> loco. It began generating reams and reams of "Couldn't get a free
> page" messages. I was away from the console, and it churned madly
> away for several hours before I was able to power cycle it.
>
> "Fortunately", I'd added the priority and size to the "Couldn't get a
> free page" message in my kernel (2.0.13 vintage, I believe), and I
> immediately realized that I was seeing request after request for a
> 2-page block at GFP_NFS priority. Eventually, I traced it back to
> this culprit in "fs/nfs/proc.c": (clip)
Have a rubber mallet handy; I may need some smacks. If it were this simple, it would already be in there. (In fact it may already be...)
... but I had two "ideas" on this and am wondering whether they will work (and, more probably, why they will not).
First, if the kernel wants 16K of memory, why doesn't it look for 16K in one place? In other words, instead of trying to free the pages one at a time, try to grab the group of pages that has the oldest cumulative "age" or "latency" _and_ is contiguous. While this seems quite expensive, it would actually run in linear time. For a block of (n) pages, you would loop through with sum = sum - age[i] + age[i+n] to get each new sum, keep track of the stalest-so-far, and then free mem[i] through mem[i+n-1]. Obviously the staleness of an unfreeable block is very, very low. (Maybe the current method runs in constant or O(log n) time or something; if so, ignore this.)
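For concreteness, here is a minimal userspace sketch of that sliding-window scan in C. The age[] array, the "0 means unfreeable" convention, and every name in it are made up for illustration; nothing here maps onto real kernel data structures.

#include <stdio.h>

#define NR_PAGES 16

/* Return the start index of the n-page window with the largest
 * cumulative age, i.e. the stalest contiguous block.  An age of 0
 * marks a page that cannot be freed, so windows containing such
 * pages naturally score poorly. */
static int stalest_block(const int age[], int nr_pages, int n)
{
    int sum = 0, best_sum, best_start = 0, i;

    for (i = 0; i < n; i++)              /* first window: pages 0..n-1 */
        sum += age[i];
    best_sum = sum;

    for (i = 0; i + n < nr_pages; i++) {
        sum = sum - age[i] + age[i + n]; /* slide the window by one page */
        if (sum > best_sum) {
            best_sum = sum;
            best_start = i + 1;
        }
    }
    return best_start;
}

int main(void)
{
    /* toy ages; the run at pages 9..12 is the stalest 4-page block */
    int age[NR_PAGES] = { 3, 1, 0, 5, 2, 2, 1, 0, 4, 9, 8, 7, 6, 0, 2, 1 };
    int start = stalest_block(age, NR_PAGES, 4);

    printf("free pages %d..%d\n", start, start + 3);
    return 0;
}

One pass over the array, so the cost is linear in the number of pages regardless of the block size asked for.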
Second (probably more viable): would it make sense for the kernel to keep a bunch of contiguous memory set aside, say 128K, or perhaps two or three times the probable maximum needed for DMA and other immediate needs, and use it only for read-only caching, so it can always be thrown away without blinking? If a user writes to one of the read-only cached sectors, the system puts the waiting-to-be-written data somewhere else.
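Something like the following is what I have in mind (all names invented, plain userspace C, not real kernel code): a small pool that only ever holds clean data, so any page in it can be handed out on the spot with no write-back.

#include <stdio.h>

#define POOL_PAGES 32          /* e.g. 128K of 4K pages */

struct ro_page {
    int  in_use;               /* currently holds a clean cached block? */
    long disk_block;           /* where that block lives on disk */
};

static struct ro_page pool[POOL_PAGES];

/* Hand out a page for DMA or another urgent contiguous need.  Any
 * in-use page may be taken instantly: its contents are clean, so
 * "freeing" it is just forgetting the cached copy (no write-back). */
static int grab_pool_page(void)
{
    int i;

    for (i = 0; i < POOL_PAGES; i++)
        if (!pool[i].in_use)
            return i;          /* already free */

    pool[0].in_use = 0;        /* none free: drop a clean copy, no I/O */
    return 0;
}

/* A write to a block cached here goes elsewhere; the clean copy in
 * the pool is simply dropped, so the pool never holds dirty data. */
static void invalidate_pool_block(long disk_block)
{
    int i;

    for (i = 0; i < POOL_PAGES; i++)
        if (pool[i].in_use && pool[i].disk_block == disk_block)
            pool[i].in_use = 0;
}

int main(void)
{
    pool[0].in_use = 1;        /* pretend disk block 42 is cached in page 0 */
    pool[0].disk_block = 42;

    invalidate_pool_block(42); /* a write arrives for block 42 */
    printf("grabbed page %d\n", grab_pool_page());
    return 0;
}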
There could perhaps be a garbage-collection process, which would run when the system had a smaller-than-desirable amount of contiguous RAM in this state; it would try to "grow" these areas by marking regions of memory as targets, and these would be migrated to read-only buffering by forbidding other uses, encouraging read buffering, and eventually evicting the non-discardable pages.
When enough blocks of memory were accrued, the garbage-collector task would go idle (except that the system would still obey the marked purposes of the contiguous areas) and the system would run as normal. When the amount got too small, the garbage collector would run: the less memory left, the more aggressively. The percentage of time spent on garbage collection would follow a curve such that the system would never run below a fixed minimum, i.e. the garbage collector would tend toward 100% CPU as the system approached the minimal amount -- in practice, only if DMA I/O were the only activity would the GC get much CPU.
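As a rough illustration of that curve, with made-up constants and a simple linear ramp standing in for whatever shape would actually work best:

#include <stdio.h>

#define RESERVE_MIN_KB   128   /* never let the reserved pool fall below this */
#define RESERVE_GOAL_KB  512   /* above this, the collector stays idle */

/* Percentage of CPU the garbage collector should take, given how much
 * contiguous reserved memory is currently available. */
static int gc_cpu_share(int reserve_kb)
{
    if (reserve_kb >= RESERVE_GOAL_KB)
        return 0;                          /* plenty left: GC idle */
    if (reserve_kb <= RESERVE_MIN_KB)
        return 100;                        /* at the floor: GC flat out */

    /* linear ramp between goal and floor; the real curve could be steeper */
    return 100 * (RESERVE_GOAL_KB - reserve_kb)
               / (RESERVE_GOAL_KB - RESERVE_MIN_KB);
}

int main(void)
{
    int kb;

    for (kb = 512; kb >= 128; kb -= 128)
        printf("%3d KB reserved -> %3d%% CPU for GC\n", kb, gc_cpu_share(kb));
    return 0;
}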
So, is this feasible?
--kmb203@psu.edu---------------Debian/GNU--1.2---Linux--2.0.25---
Develop free apps? http://www.jagunet.com/~braddock/fslu/org
-----------------------------------------------------------------
"You are in a maze of twisty little passages, all alike."