Date: Mon, 24 Jun 1996 10:57:09 -0500
From: Andrew C. Esh
Subject: Re: Virtual memory exhausted?
>>>>> "Joel" == Joel Young <jdyoung@erinet.com> writes:
Joel> I tried this the other day (I have 96Mb) and also ran out of
Joel> virtual memory (96Mb + 36Mb swap). Maybe we need newer
Joel> versions of make? I can put a number in like make -j 10 and
Joel> that works fine.
Joel> Joel jdyoung@erinet.com
Joel> On 11 Jun 1996, Simon Josefsson wrote:
>> When I try to compile the kernel with "make -j zImage" it just
>> stops with an "Virtual memory exhausted" at ~60 % done, I
>> thought that was due to all the swapping so I upgraded to 64MB
>> (had 32MB before) and I still can't do it.
>>
>> It doesn't access the disc much so the swap couldn't be =that=
>> exhausted. Any clues to get this to work?
>>
>> How much memory are you running on, when compiling
>> (successfully) with -j?
>>
>> Running linux2, 64MB, P133, AHA2940, /usr/src is IBM 1GB. I
>> still only have 256kb of cache (I'm shopping for more) so it
>> isn't much faster than with 32MB but I think that wouldn't
>> affect the virtual-memory-thing.
Here's my make:
andrewes:/Z$ make --version
GNU Make version 3.74, by Richard Stallman and Roland McGrath.
Copyright (C) 1988, 89, 90, 91, 92, 93, 94, 95 Free Software Foundation, Inc.
(yada yada yada)
I noticed that make -j can lead to trouble while compiling the kernel. The -j option will spawn as many processes as there are targets, and some of the kernel sections (net) have quite a few targets (> 40?). I have found that the best make settings for me are -j5 -l5, which won't spawn more than 5 processes, and won't spawn more than one process if the load average is above 5. I put these arguments into the main Makefile (I change the value of $MAKE), since they won't get propagated to the sub-makes otherwise. (I.e., command line arguments to make only take effect in the top level directory.)
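For reference, the change I make is roughly the following; the exact variable name and its default differ between kernel versions, so take this as a sketch of the idea rather than the literal patch:

    # Near the top of linux/Makefile: limit parallel jobs to 5 and
    # stop spawning new ones when the load average is above 5.
    # (Variable name and placement may vary by kernel version.)
    MAKE = make -j5 -l5

With that in place, every sub-make invoked through $(MAKE) inherits the same limits, which is the whole point of putting it in the Makefile instead of on the command line.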
I have also noticed, however, that make will sometimes "get loose" from those restrictions and spawn processes for the rest of the targets. This happens when the load average falls below the -l limit after having been above it and throttled; it seems make forgets about the -j setting at that point. I haven't reproduced this with make 3.74 for certain yet, but previous versions of make did this to me all the time.
BTW: I have 32MB memory, and another 73MB in two swap partitions. If I do a plain make -j, I will run out of that. The end comes when "free" shows that swap has run out. My load average at that time is over 15, the machine is almost completely unresponsive, and the disk is thrashing.
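If you want to watch the swap drain while a build runs, something like this from a second VC or xterm will do it (plain Bourne shell, nothing kernel-specific; the 10-second interval is just my choice):

    # Print memory and swap usage every 10 seconds while the build runs.
    while true; do free; sleep 10; done

When the "free" column of the Swap: line hits zero, the end described above is not far off.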
Also, unless you have an extremely fast processor (mine is a 486/66), having more swap than I do won't make a plain make -j finish any faster. As I noted above, once you get a large number of processes running, the machine stops making progress; it spends most of its time doing housekeeping. I would be willing to bet that the kernel source and the compiler executable have reached such a size that a make -j on a 64MB 486/66 will also freeze. I'm not sure, though.
Lately, rather than edit the Makefile and later have a patch fail, I just live with single target makes. I read mail and browse the Web while the kernel compiles.
"Oh Linus, your source code is so large, and my CPU is so small."
---
Andrew C. Esh                   mailto:andrew_esh@cnt.com
Computer Network Technology     andrewes@mtn.org (finger for PGP key)
6500 Wedgwood Road              612.550.8000 (main)
Maple Grove MN 55311            612.550.8229 (direct)
http://www.cnt.com - CNT Inc. Home Page
http://www.mtn.org/~andrewes - ACE Home Page