Date: 2008-11-25
From: KOSAKI Motohiro
Subject: Re: [PATCH] vmscan: bail out of page reclaim after swap_cluster_max pages
2008/11/25 Rik van Riel <riel@redhat.com>:
> KOSAKI Motohiro wrote:
>>>
>>> Sometimes the VM spends the first few priority rounds rotating back
>>> referenced pages and submitting IO. Once we get to a lower priority,
>>> sometimes the VM ends up freeing way too many pages.
>>>
>>> The fix is relatively simple: in shrink_zone() we can check how many
>>> pages we have already freed; direct reclaim tasks break out of the
>>> scanning loop if they have already freed enough pages and have reached
>>> a lower priority level.
>>>
>>> However, in order to do this we need to know how many pages we have
>>> already freed, so move nr_reclaimed into scan_control.
>>>
>>> Signed-off-by: Rik van Riel <riel@redhat.com>
>>> ---
>>> Kosaki, this should address the zone scanning pressure issue.
>>
>> Hmmm. I still don't like the behavior when priority==DEF_PRIORITY,
>> but I should also explain that with code and a benchmark.
>
> Well, the behaviour when priority==DEF_PRIORITY is the
> same as the kernel's behaviour without the patch...


Yes, but I think it decreases this patch's value...



>> Therefore, I'll try to measure this patch this week.
>
> Looking forward to it.

thank you.
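
For readers following along, here is a minimal user-space sketch of the
bail-out idea described in the quoted patch. It assumes a simplified
scan_control with an nr_reclaimed field and a made-up shrink_list(); the
names, numbers, and loop structure are illustrative stand-ins, not the
actual mm/vmscan.c code:

/*
 * Sketch of the bail-out idea from the quoted patch, not the kernel code
 * itself: once a direct reclaimer has freed more than swap_cluster_max
 * pages and has dropped below the initial priority, it stops scanning
 * instead of freeing way too many pages. At priority == DEF_PRIORITY
 * (the first round) the behaviour is unchanged.
 */
#include <stdio.h>

#define DEF_PRIORITY      12
#define SWAP_CLUSTER_MAX  32

struct scan_control {
	unsigned long nr_reclaimed;      /* pages freed so far (moved here by the patch) */
	unsigned long swap_cluster_max;  /* reclaim target per invocation */
};

/* Pretend each scan pass frees a few more pages as priority drops. */
static unsigned long shrink_list(int priority)
{
	return (DEF_PRIORITY - priority) * 4;
}

static void shrink_zone(int priority, struct scan_control *sc)
{
	unsigned long lists_to_scan = 4;  /* pretend: active/inactive, anon/file */

	while (lists_to_scan--) {
		sc->nr_reclaimed += shrink_list(priority);

		/*
		 * Bail out: a reclaimer that has already freed enough pages
		 * and is past the first priority round stops scanning here.
		 */
		if (sc->nr_reclaimed > sc->swap_cluster_max &&
		    priority < DEF_PRIORITY)
			break;
	}
}

int main(void)
{
	struct scan_control sc = { .swap_cluster_max = SWAP_CLUSTER_MAX };
	int priority;

	for (priority = DEF_PRIORITY; priority >= 0; priority--) {
		shrink_zone(priority, &sc);
		printf("priority %2d: %lu pages reclaimed so far\n",
		       priority, sc.nr_reclaimed);
		if (sc.nr_reclaimed >= SWAP_CLUSTER_MAX)
			break;
	}
	return 0;
}

Running this shows the reclaim total climbing over a few priority rounds
and stopping shortly after it crosses swap_cluster_max, rather than
continuing through every list at every priority.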

