From:    kwr@kwr ...
Subject: Re: VM: question
Date:    Tue, 14 Apr 1998 12:24:32 -0500 (CDT)
And lo, Rik van Riel saith unto me:
> On 5 Apr 1998, Eric W. Biederman wrote:
> > I'm suffering from memory fragmentation slow downs, and my machine
> > just has to be up for a while before memory gets sufficiently
> > fragmented to cause trouble :(
> >
> > Anyone want to explain to me really slowly why we try to keep huge
> > chunks of contiguous memory?
> DMA, 8k NFS fragments, etc...

Those are good reasons for being able to *allocate* contiguous memory
when we need it.  Unfortunately, Linux's current algorithm is "throw
out random stuff until you luck onto a big hunk of contiguous free
memory", which sucks even on high-memory machines.  I tried to get
reverse lookups implemented at one point, but things kept changing
under me and I gave up... there are way too many places you have to
change, IMHO...
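For the curious: "reverse lookups" here means keeping, for each
physical page, a list of the pte's that map it, so the kernel can
evict one specific page instead of throwing things out at random.  A
minimal sketch of what the bookkeeping might look like -- every name
below is made up for illustration, none of it is in the current tree:

typedef unsigned long pte_t;        /* stand-in for the kernel's pte type */

/* One entry per mapping of a physical page. */
struct rmap_entry {
        pte_t *ptep;                /* the pte that maps this page */
        struct rmap_entry *next;    /* other mappings of the same page */
};

/* Hung off each mem_map entry: head of that page's pte chain. */
struct page_rmap {
        struct rmap_entry *chain;
};

/*
 * To reclaim a specific page: walk its chain, clear each pte, flush
 * the TLB, and hand the frame back to the allocator.  The catch is
 * that every place that sets or clears a pte has to maintain the
 * chain, which is exactly the "way too many places" above.
 */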
> > And why if that is important why we don't implement a relocating
> > defragmentation algorithm in the kernel?  On the assumption that I
> > could pause the kernel for a moment, it would be probably faster to
> > do that on demand, then the current mess!
> Even better, make sure that fragmentation doesn't occur very
> often by freeing pages on demand before we use a free page
> from a big free area.

But if/since large contiguous areas aren't required very often, and
low-memory machines may want what's in those pages much more often
than they (e.g.) initialize the floppy driver, keeping these areas
free is just as bad as find /lib/modules -exec modprobe {} \;...

> > There is a defragmentation algorithm that runs in O(mem_size) time
> > with two passes over memory, and needs no extra memory.
> But it does need huge amounts of CPU time... memcpy() isn't
> exactly cheap :(

I'd much rather memcpy() a page than wait on the disk, even on a slow
386...
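The two-pass scheme Eric mentions is presumably classic sliding
compaction: pass one computes a forwarding address for every live
page, pass two memcpy()s pages down and fixes up the mappings (which
again wants reverse lookups).  A rough user-space sketch of the idea;
page_used, remap_page, and the forward[] table are all invented for
illustration, and the table itself is a cheat -- a true no-extra-memory
version would stash forwarding addresses in the page structures:

#include <string.h>

#define NPAGES  1024
#define PAGESZ  4096

static int  page_used[NPAGES];          /* 1 if the frame holds live data */
static char page_data[NPAGES][PAGESZ];  /* the frames themselves */
static int  forward[NPAGES];            /* pass 1 result: new frame number */

/* Stub: in the kernel this would walk the reverse map and point
 * every pte that references frame `from` at frame `to` instead. */
static void remap_page(int from, int to) { (void)from; (void)to; }

void compact_memory(void)
{
        int src, dst = 0;

        /* Pass 1: O(mem_size) -- assign each live page a forwarding
         * address; live pages slide toward the bottom of memory. */
        for (src = 0; src < NPAGES; src++)
                if (page_used[src])
                        forward[src] = dst++;

        /* Pass 2: O(mem_size) -- copy live pages down and fix the
         * mappings.  Since forward[src] <= src, each destination
         * frame was already evacuated earlier in this loop. */
        for (src = 0; src < NPAGES; src++) {
                if (!page_used[src] || forward[src] == src)
                        continue;
                memcpy(page_data[forward[src]], page_data[src], PAGESZ);
                remap_page(src, forward[src]);
                page_used[forward[src]] = 1;
                page_used[src] = 0;
        }
        /* Frames dst..NPAGES-1 are now one contiguous free area. */
}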
Keith