Date:	Wed, 31 Oct 2001 18:42:56 +0100
From:	Stephan von Krawczynski <>
Subject:	Re: new OOM heuristic failure (was: Re: VM: qsbench)
On Wed, 31 Oct 2001 14:04:45 -0200 (BRST) Rik van Riel <riel@conectiva.com.br> wrote:
> On Wed, 31 Oct 2001, Linus Torvalds wrote:
>
> > I could probably argue that the machine really _is_ out of memory at this
> > point: no swap, and it obviously has to work very hard to free any pages.
> > Read the "out_of_memory()" code (which is _really_ simple), with the
> > realization that it only gets called when "try_to_free_pages()" fails and
> > I think you'll agree.
>
> Absolutely agreed, an earlier out_of_memory() is probably a good
> thing for most systems. The only "but" is that Lorenzo's test
> program runs fine with other kernels, but you could argue that
> it's a corner case anyway...
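For reference, the calling pattern Linus describes, where out_of_memory() is only consulted after try_to_free_pages() has already failed, can be sketched as a stand-alone toy like this (the function bodies are stubs for illustration, not the actual 2.4 source):

/* Toy sketch of the calling pattern described above: the OOM check only
 * runs after page reclaim has already failed.  Stubs, not kernel code. */
#include <stdio.h>

static int try_to_free_pages(void)
{
	/* Pretend reclaim found nothing freeable (returns pages freed). */
	return 0;
}

static void out_of_memory(void)
{
	/* In the kernel this would decide whether to kill a process. */
	printf("out_of_memory(): reclaim failed, considering an OOM kill\n");
}

int main(void)
{
	if (!try_to_free_pages())
		out_of_memory();
	return 0;
}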
I took a deep look into this code and wonder how this benchmark manages to get killed. If I read it right, this would imply that shrink_cache has run a hundred times through the _complete_ inactive_list without finding any freeable pages, with one exception that I came across:
	int max_mapped = nr_pages*10;
...
page_mapped:
		if (--max_mapped >= 0)
			continue;
		/*
		 * Alert! We've found too many mapped pages on the
		 * inactive list, so we start swapping out now!
		 */
		spin_unlock(&pagemap_lru_lock);
		swap_out(priority, gfp_mask, classzone);
		return nr_pages;
Is it possible that this exits shrink_cache too early? I don't know how much memory Lorenzo has, but even a single pass through several hundred MB of inactive list takes a noticeable time on my system; a hundred passes could take far more than 70 s. But if there is never a complete pass, you cannot claim to really be oom. Does it make sense to stop shrink_cache after having found only 4 kB * 32 * 10 = 1280 kB of mapped memory on an inactive list that may be several hundred MB in size?
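To put rough numbers on that, here is a small stand-alone sketch. Only the max_mapped formula comes from the code above; nr_pages = 32, 4 kB pages and a 300 MB inactive list are my own assumptions for illustration:

/* Illustration of how small the max_mapped budget is compared to a large
 * inactive list.  Assumes nr_pages = 32, 4 kB pages, 300 MB inactive list. */
#include <stdio.h>

int main(void)
{
	const long page_size_kb   = 4;			/* 4 kB pages (assumed)   */
	const long nr_pages       = 32;			/* assumed SWAP_CLUSTER   */
	const long max_mapped     = nr_pages * 10;	/* formula from the code  */
	const long budget_kb      = max_mapped * page_size_kb;
	const long inactive_mb    = 300;		/* made-up list size      */
	const long inactive_pages = inactive_mb * 1024 / page_size_kb;

	printf("max_mapped budget: %ld pages (%ld kB)\n", max_mapped, budget_kb);
	printf("inactive list    : %ld pages (%ld MB)\n", inactive_pages, inactive_mb);
	printf("fraction of list scanned before bailing out: %.3f%%\n",
	       100.0 * max_mapped / inactive_pages);
	return 0;
}

With those assumptions the loop gives up after looking at well under one percent of the inactive list, which is what makes me doubt the "out of memory" conclusion.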
Regards,
Stephan