Date:    Fri, 18 May 2001 19:45:15 +0200 (CEST)
From:    Mike Galbraith <>
Subject: Re: Linux 2.4.4-ac10
On Fri, 18 May 2001, Rik van Riel wrote:
> On Fri, 18 May 2001, Mike Galbraith wrote:
> > On Thu, 17 May 2001, Rik van Riel wrote:
> > > On Thu, 17 May 2001, Mike Galbraith wrote:
> > >
> > > > Only doing parallel kernel builds.  Heavy load throughput is up,
> > > > but it swaps too heavily.  It's a little too conservative about
> > > > releasing cache now imho.  (keeping about double what it should be
> > > > with this load.. easily [thump] tweaked;)
> > >
> > > "about double what it should be" ?
> >
> > Do you think there's 60-80mb of good cachable data? ;-)  The "double"
> > is based upon many hundreds of test runs.  I "know" that performance
> > is best with this load when the cache stays around 25-35Mb.  I know
> > this because I've done enough bend adjusting to get throughput to
> > within one minute of single task times to have absolutely no doubt.
> > I can get it to 30 seconds with much obscene tweaking, and have done
> > it with zero additional overhead for make -j 30 ten times in a row.
> > (that kernel was.. plain weird.  perfect synchronization.. voodoo!)
>
> Ahhh, I see.  Remember that the "cached" figure you are
> seeing also includes swap-cached data from the gccs, which
> results from kswapd scanning the processes, clearing the
> PTE and, a bit later, the process grabbing the page again.
Yes.
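(For anyone following along at home, the distinction is easy to see on kernels
that export the swap cache as a separate "SwapCached:" line in /proc/meminfo;
on the 2.4.4-ac kernel discussed here it is folded into "Cached:", which is
exactly Rik's point.  A minimal sketch, assuming that separate field exists:

/* meminfo_split.c - rough sketch: report the page cache figure and how
 * much of it is swap cache, using /proc/meminfo.  Assumes a kernel that
 * exports a separate "SwapCached:" line; values are in kB. */
#include <stdio.h>
#include <string.h>

static long field(const char *name)
{
	FILE *f = fopen("/proc/meminfo", "r");
	char line[256];
	long val = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, name, strlen(name))) {
			sscanf(line + strlen(name), "%ld", &val);
			break;
		}
	}
	fclose(f);
	return val;		/* kB, or -1 if the field is missing */
}

int main(void)
{
	long cached = field("Cached:");
	long swapcached = field("SwapCached:");

	if (cached < 0)
		return 1;
	printf("page cache:            %ld kB\n", cached);
	if (swapcached >= 0)
		printf("  of which swap cache: %ld kB\n", swapcached);
	return 0;
}

Subtracting the second number from the first gives the "real" file cache,
without the swap-cached gcc pages inflating it.)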
> I suspect that if the gccs _just_ fit in memory, you can
> get some extra performance by mercilessly eating from the
> cache and keeping the gccs in memory.  However, I also have
> the sneaking suspicion that this is not the best tactic for
> all workloads ;)
Yes, ~exactly!  I chose 30 tasks because they almost do fit
(tool/userland dependent.. must recalibrate often).  The hard part is
getting the vm to automagically detect the rss/cache munch tradeoff
point without all the manual help.
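(The manual recalibration boils down to checking whether a given -j level
starts pushing the compilers out to swap.  A crude sketch of that check,
not from this thread: the helper name is hypothetical, and it assumes a
kernel that exposes a "pswpout" counter in /proc/vmstat; on the 2.4
kernels discussed here the equivalent counter lives on the "swap" line of
/proc/stat, so the parsing would differ.

/* swapwatch.c - sample the swap-out counter before and after a build
 * and report the delta, e.g.:  ./swapwatch 'make -j30 bzImage' */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static long pswpout(void)
{
	FILE *f = fopen("/proc/vmstat", "r");
	char line[128];
	long val = -1;

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		if (!strncmp(line, "pswpout ", 8)) {
			val = atol(line + 8);
			break;
		}
	}
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	long before, after;

	if (argc < 2) {
		fprintf(stderr, "usage: %s 'make -j30 bzImage'\n", argv[0]);
		return 1;
	}
	before = pswpout();
	system(argv[1]);	/* run the build under test */
	after = pswpout();
	if (before < 0 || after < 0)
		return 1;
	printf("pages swapped out during run: %ld\n", after - before);
	return 0;
}

A near-zero delta means the gccs still fit; a large one means the -j level
has crossed the tradeoff point.)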
-Mike