Subject: Re: broken VM in 2.4.10-pre9
From: Linus Torvalds
Date: Mon, 17 Sep 2001

On Mon, 17 Sep 2001, Jan Harkes wrote:

> On Mon, Sep 17, 2001 at 02:33:12PM +0200, Daniel Phillips wrote:
> > The inactive queues have always had both mapped and unmapped pages on
> > them. The reason for unmapping a swap cache page when putting it
>
> So the following code in refill_inactive_scan only exists in my
> imagination?
>
> 	if (page_count(page) <= (page->buffers ? 2 : 1)) {
> 		deactivate_page_nolock(page);

No, but I agree with Daniel that it's wrong.

The reason it exists there is that the current inactive_clean list
scanning doesn't feed any pressure back into VM scanning, so if we let
mapped pages onto the inactive queue, reclaim_page() would be unhappy
about them.
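
For reference, here is how I read that count check, as a user-space toy
model (the struct, the field names and the helpers below are invented for
illustration - this is not the <linux/mm.h> layout, just the accounting):
the page cache holds one reference, attached buffers hold one more, and
every process mapping the page adds another on top, so a count at or
below the threshold means no user mappings are left.

/*
 * Hedged sketch only: a toy model of the accounting behind
 *     page_count(page) <= (page->buffers ? 2 : 1)
 * Everything here is a simplified stand-in, not the 2.4 code.
 */
#include <stdio.h>

struct toy_page {
	int has_buffers;	/* non-zero if buffer heads are attached */
	int mapcount;		/* how many process page tables map it */
};

/* One ref from the page cache, one more if buffers are attached,
 * plus one per user mapping. */
static int toy_page_count(const struct toy_page *p)
{
	return 1 + (p->has_buffers ? 1 : 0) + p->mapcount;
}

/* The check in question: true only when no user mappings remain. */
static int safe_to_deactivate(const struct toy_page *p)
{
	return toy_page_count(p) <= (p->has_buffers ? 2 : 1);
}

int main(void)
{
	struct toy_page mapped   = { .has_buffers = 1, .mapcount = 3 };
	struct toy_page unmapped = { .has_buffers = 1, .mapcount = 0 };

	printf("mapped:   %d\n", safe_to_deactivate(&mapped));	/* 0 */
	printf("unmapped: %d\n", safe_to_deactivate(&unmapped));	/* 1 */
	return 0;
}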

That can be solved several ways:
- like we do now. Hackish and wrong, but kind-of-works.
- make reclaim_page() apply the VM scanning pressure itself (ie if it
  starts noticing that there are too many mapped pages on the reclaim
  list, it should trigger a VM scan) - see the sketch after this list.
- physical maps (ie reverse mappings from the physical page back to the
  ptes that map it).
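
To make the second option concrete, here is a rough user-space sketch of
the idea (the names - toy_reclaim_page(), toy_vm_scan(), the threshold -
are all invented for illustration; this is the shape of the idea, not a
patch against mm/vmscan.c): the reclaim pass counts how many mapped pages
it has to skip, and past some threshold it kicks the VM scanner instead
of just failing silently.

/* Hedged sketch only: option two, reclaim feeding pressure back into
 * VM scanning.  All names are made up; nothing matches the real code. */
#include <stdio.h>

#define MAPPED_PRESSURE_THRESHOLD 4	/* arbitrary for the sketch */

struct toy_page { int mapped; };

static void toy_vm_scan(void)
{
	/* Stand-in for waking up the VM scanner to unmap pages. */
	printf("too many mapped pages on the reclaim list: kick VM scan\n");
}

static struct toy_page *toy_reclaim_page(struct toy_page list[], int n)
{
	int mapped_seen = 0;

	for (int i = 0; i < n; i++) {
		if (list[i].mapped) {
			mapped_seen++;		/* can't reclaim it directly */
			continue;
		}
		return &list[i];		/* unmapped: reclaimable */
	}

	/* Nothing reclaimable; push the pressure back into VM scanning
	 * so the mappings actually get torn down. */
	if (mapped_seen > MAPPED_PRESSURE_THRESHOLD)
		toy_vm_scan();

	return NULL;
}

int main(void)
{
	struct toy_page list[8] = {
		{1}, {1}, {1}, {1}, {1}, {1}, {1}, {1}
	};

	if (!toy_reclaim_page(list, 8))
		printf("nothing directly reclaimable\n");
	return 0;
}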

Actually, now that I look at it, the lack of de-activation also hurts
page_launder() - it doesn't get to launder pages that are still mapped,
even though getting rid of their buffers would almost certainly be good
under memory pressure.
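
To illustrate the case page_launder() never reaches, a last toy sketch
(again with invented toy_* names, and toy_free_buffers() just standing in
for something like try_to_free_buffers()): even when a page is still
mapped and can't be freed, its attached buffers could still be dropped
under pressure.

/* Hedged sketch only: the mapped-page case that never gets laundered
 * today because mapped pages aren't deactivated.  Not a patch. */
#include <stdio.h>

struct toy_page { int mapped; int has_buffers; };

static void toy_free_buffers(struct toy_page *p)
{
	p->has_buffers = 0;	/* stand-in for dropping the buffer heads */
}

static void toy_launder(struct toy_page *p)
{
	if (p->mapped) {
		/* The page itself can't be freed while it is mapped, but
		 * dropping its buffers would still relieve memory pressure. */
		if (p->has_buffers)
			toy_free_buffers(p);
		return;
	}
	/* Unmapped: the normal launder/writeout path would run here. */
}

int main(void)
{
	struct toy_page p = { .mapped = 1, .has_buffers = 1 };

	toy_launder(&p);
	printf("buffers left after launder pass: %d\n", p.has_buffers);
	return 0;
}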

Linus

