Subject: Re: Strange interrupt behaviour


On Mon, 13 Jul 1998, Linus Torvalds wrote:

> Note that we actually _do_ have code that tries to keep memory free
> enough to allocate dual pages - that's what the kswapd daemon is there
> for.

A blind algorithm that guarantees 1 dual page is always available would
have to keep half of the memory + 1 page free.
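
A quick worst-case illustration (my own numbers, purely for the argument):
with 8 pages, the allocator could hold pages 0, 2, 4 and 6, leaving 4 pages
free but no two of them adjacent; only once a 5th page (half + 1) is freed
are two adjacent free pages guaranteed.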

> One of the problems is that the kswapd daemon is completely asynchronous,
> which means that yes, it will free up pages in the background, but that
> doesn't help if at the moment when you wanted two pages they weren't
> there: __get_free_pages() at no point tries to wait for kswapd to do its
> thing. So there is memory available, it's just right now busy being
> swapped out..

We do not need dual pages very often.
Swapping out old/unused pages in the background to maintain some free
memory is fine, so that memory is there when we need it. This works well
for single page allocations, which should be the general case, IMO.

A couple of months ago I tried to simulate the worst possible memory
management algorithm, which worked as follows:

1 - Assume all the memory is busy.
2 - Free randomly 1 page at a time until a block of N contiguous pages is
    available.

I could not retrieve the results, but I remember that even for N=2 they
were quite bad (something like > 25 % of memory uselessly reaped).
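
For what it is worth, here is roughly how such a simulation could look in
user space (my own reconstruction, not the original program; NPAGES and
TRIALS are arbitrary, and the original may have measured the result a bit
differently):

#include <stdio.h>
#include <stdlib.h>

#define NPAGES 4096   /* arbitrary memory size, in pages */
#define TRIALS 1000   /* arbitrary number of runs */

int main(void)
{
    static char busy[NPAGES];
    long total_freed = 0;

    srand(1);
    for (int t = 0; t < TRIALS; t++) {
        int freed = 0, found = 0;

        for (int i = 0; i < NPAGES; i++)
            busy[i] = 1;                /* 1 - assume all the memory is busy */

        while (!found) {
            int p = rand() % NPAGES;
            if (!busy[p])
                continue;               /* already free, pick another page */
            busy[p] = 0;                /* 2 - free one random page */
            freed++;
            if (!busy[p ^ 1])           /* is p's order-1 buddy free too? */
                found = 1;              /* a dual page is now available */
        }
        total_freed += freed;
    }
    printf("average pages reaped before a dual page appeared: %.1f of %d\n",
           (double)total_freed / TRIALS, NPAGES);
    return 0;
}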

I agree that getting help from kswapd in order to get a dual page is the
right thing to do, but in this situation kswapd should preferentially try
to victimize pages that can _actually_ help free a dual page.
(This can only work if we can wait for the allocation.)

How would you proceed with a similar problem in real life?
Wouldn't you be careful to victimize only things that have a real chance
of leading to the required result?
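
To make that concrete, the kind of selection I have in mind looks roughly
like this (an illustrative user-space model only, not a patch; busy[] is
just a map of which pages are in use):

/* Prefer a victim whose order-1 buddy is already free, so that
 * reclaiming a single page immediately yields a free dual page. */
static int pick_dual_friendly_victim(const char *busy, int npages)
{
    for (int p = 0; p < npages; p++) {
        /* p ^ 1 is p's buddy within an order-1 (dual page) block */
        if (busy[p] && !busy[p ^ 1])
            return p;       /* freeing p completes the dual page p & ~1 */
    }
    return -1;              /* no such page: fall back to the normal policy */
}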

> This is why fork() normally works, but then sometimes when the system is
> busy doing a lot of things the free memory pool has been temporarily
> depleted because kswapd hasn't had time to react to things yet, and you
> get a fork() failure.

My thought is that this works because the buddy allocator is careful with
regard to fragmentation, and shrink_mmap() scans the page map sequentially,
which is not bad for fragmentation either.
But if we are very low on memory, we cannot count on that for dual page
allocation.

> And this is why I think that there should be some fairly simple approach
> to fixing it.. It might be as simple as a "wait for kswapd" thing after
> we have failed to allocate something.

Agreed.
Wait for kswapd to do the _right_ job according to the circumstances.
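
Something along these lines is what I picture for the allocation path (a
sketch only; try_alloc_dual_page(), wake_kswapd() and wait_for_kswapd() are
stand-ins for whatever mechanism the real kernel would use):

/* Sketch of the retry idea: if the first attempt fails, wake the swap
 * daemon, give it a chance to do the _right_ job, then retry a bounded
 * number of times before reporting failure.  All helpers are hypothetical. */
void *alloc_dual_page_or_wait(void)
{
    for (int tries = 0; tries < 3; tries++) {
        void *p = try_alloc_dual_page();  /* hypothetical non-blocking attempt */
        if (p)
            return p;
        wake_kswapd();                    /* hypothetical: ask kswapd for help */
        wait_for_kswapd();                /* hypothetical: sleep until it has run */
    }
    return NULL;                          /* still nothing: let the caller fail */
}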

> There is a second problem, which is that we often don't select the right
> pool of pages to throw out. There are obvious problems on low-memory
> machines that seem to get fragmented by inodes and dentries. The mm code
> has code to dispose of the dentry cache when it's low on memory, but that
> doesn't seem to be triggered as well as it should be (and I suspect that
> one of the reasons is the code that looks like
>
> if (((buffermem >> PAGE_SHIFT) * 100 > buffer_mem.borrow_percent * num_physpages)
> || (page_cache_size * 100 > page_cache.borrow_percent * num_physpages))
> state = 0;
>
> which will "reset" the swap-out state to try to get rid of the page cache
> and the buffer cache, but it will also mean that the code that tries to
> shrink the dcache won't be reached very easily.. The above code in turn
> was a trial to try to get the swapper to be more aggressive in throwing
> out page cache and buffer pages, and it may be that it backfired in other
> ways..

The page cache is very efficient but very aggressive, and people want to
shrink it preferentially; however, pages in the page cache are very easy
to reap synchronously. On the other hand, shrink_mmap() does not do a bad
job with regard to fragmentation. So I think that a large page cache is
not a problem for dual page allocation.
The balancing between the page cache, the buffer cache and swap is another
story...

Regards,
Gerard.



