Subject: Re: [PATCH 1/5] vmscan: remove all_unreclaimable check from direct reclaim path completely
Hi

> Thanks for your effort, Kosaki.
> But I still doubt this patch is good.
>
> This patch causes early OOM killing during hibernation because it
> skips the all_unreclaimable check.
> Normally, hibernation needs a lot of memory, so page reclaim pressure
> would be high on a small-memory system. So I don't like giving up early.

Wait. When does this big pressure occur? The hibernation reclaim target
(sc->nr_to_reclaim) is derived from the physical memory size, so the
pressure doesn't seem to depend on how small the system is.
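
Roughly, paraphrasing (not quoting) the current tree, the target that
hibernation passes down scales with the saveable pages, which scale with
RAM ("size" below is the desired image size in pages):

/* kernel/power/snapshot.c: hibernate_preallocate_memory(), paraphrased */
saveable = count_data_pages() + count_highmem_pages(); /* grows with RAM */
...
shrink_all_memory(saveable - size); /* so the reclaim target grows too */

/* mm/vmscan.c: the target goes straight into sc->nr_to_reclaim */
unsigned long shrink_all_memory(unsigned long nr_to_reclaim)
{
	struct scan_control sc = {
		.nr_to_reclaim = nr_to_reclaim,
		.hibernation_mode = 1,
		/* ... */
	};
	/* ... */
	return do_try_to_free_pages(zonelist, &sc);
}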


> Do you think my patch has a problem? Personally, I think it's very
> simple and clear. :)

To be honest, I dislike the following part. It's madness on madness.

static bool zone_reclaimable(struct zone *zone)
{
	if (zone->all_unreclaimable)
		return false;

	return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
}


The function requires a reviewer to know that

o pages_scanned and all_unreclaimable are racy
o during hibernation, zone->all_unreclaimable can be a false negative,
but can't be a false positive.

And the function comment of all_unreclaimable() says

/*
* As hibernation is going on, kswapd is freezed so that it can't mark
* the zone into all_unreclaimable. It can't handle OOM during hibernation.
* So let's check zone's unreclaimable in direct reclaim as well as kswapd.
*/

But now it is no longer a copy of the kswapd algorithm.

If you still strongly prefer this idea even after the above explanation,
please consider adding many more comments. I can't say your current
patch is readable/reviewable enough as it is.
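
For example, something like this (just my sketch of the kind of
commentary I mean; the wording is mine, not from your patch):

/*
 * zone->pages_scanned and zone->all_unreclaimable are both updated
 * racily (without zone->lru_lock), so this is only a heuristic for
 * deciding when to give up.
 *
 * During hibernation kswapd is frozen, so zone->all_unreclaimable can
 * be a false negative but never a false positive; treat it only as an
 * early-exit hint and fall back to the pages_scanned check.
 */
static bool zone_reclaimable(struct zone *zone)
{
	if (zone->all_unreclaimable)
		return false;

	return zone->pages_scanned < zone_reclaimable_pages(zone) * 6;
}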

Thanks.



