Subject: Re: [RFC PATCH 2/2] Don't continue reclaim if the system has plenty of free memory
From: Minchan Kim
Date: Thu, 9 Jul 2009
On Thu, Jul 9, 2009 at 2:08 PM, KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
>> Hi, Kosaki.
>>
>> On Tue, Jul 7, 2009 at 6:48 PM, KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:
>> > Subject: [PATCH] Don't continue reclaim if the system has plenty of free memory
>> >
>> > In a concurrent reclaim situation, if one reclaimer triggers the OOM
>> > killer, the other reclaimers could stop reclaiming, because the OOM
>> > killer has already made enough memory free.
>> >
>> > But the current kernel doesn't have that logic, so we can hit the
>> > following accidental second-OOM scenario:
>> >
>> > 1. System memory is used by only one big process.
>> > 2. A memory shortage occurs and concurrent reclaim starts.
>> > 3. One reclaimer triggers OOM and the OOM killer kills the big process.
>> > 4. Almost all reclaimable pages are freed.
>> > 5. Another reclaimer can't find any reclaimable pages because they were
>> >    already freed.
>> > 6. Then the system triggers an accidental and unnecessary second OOM kill.
>> >
>>
>> Did you actually see this situation?
>> I ask because we already have a routine for preventing parallel
>> OOM killing in __alloc_pages_may_oom().
>>
>> Couldn't it protect against your scenario?
>
> Can you please look at the actual code of this patch?

I mean the following:

static inline struct page *
__alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
	struct zonelist *zonelist, enum zone_type high_zoneidx,
	...
<snip>

	/*
	 * Go through the zonelist yet one more time, keep very high watermark
	 * here, this is only to catch a parallel oom killing, we must fail if
	 * we're still under heavy pressure.
	 */
	page = get_page_from_freelist(gfp_mask|__GFP_HARDWALL, nodemask,
		order, zonelist, high_zoneidx,
		ALLOC_WMARK_HIGH|ALLOC_CPUSET,
		preferred_zone, migratetype);


> Those two patches fix different problems.
>
> 1/2 fixes the issue that concurrent direct reclaimers make
> too many isolated pages (sketched below).
> 2/2 fixes the issue that a race between reclaim and exit causes an
> accidental OOM.
>
>
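For context, here is a minimal sketch of the throttling idea behind patch
1/2. It is not the posted patch: the helper name is invented, and it is
modeled on the too_many_isolated() check (and the NR_ISOLATED_* vmstat
counters) that landed in mainline mm/vmscan.c somewhat later.

/*
 * Throttle a direct reclaimer when concurrent reclaimers have already
 * isolated as many pages as remain on the corresponding inactive list.
 */
static int too_many_isolated_sketch(struct zone *zone, int file)
{
	unsigned long inactive, isolated;

	/* kswapd must never be throttled here. */
	if (current_is_kswapd())
		return 0;

	if (file) {
		inactive = zone_page_state(zone, NR_INACTIVE_FILE);
		isolated = zone_page_state(zone, NR_ISOLATED_FILE);
	} else {
		inactive = zone_page_state(zone, NR_INACTIVE_ANON);
		isolated = zone_page_state(zone, NR_ISOLATED_ANON);
	}

	return isolated > inactive;
}

A direct reclaimer that sees this return true can simply sleep and retry
instead of isolating even more pages.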
>> If it can't, could you explain the scenario in more detail?
>
> The __alloc_pages_may_oom() check doesn't affect threads that have
> already entered reclaim. That's obvious.

Threads that have already entered direct reclaim will call
__alloc_pages_may_oom() before out_of_memory(). At that point, if a big
process was killed a short while ago, the get_page_from_freelist() call in
__alloc_pages_may_oom() will finally succeed, so I don't think it causes
an OOM.
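For reference, this ordering looks roughly like the following condensed
sketch of the 2.6.31-era __alloc_pages_slowpath() in mm/page_alloc.c;
slowpath_sketch() is an illustrative name, and the real function also
handles kswapd wakeup, watermark rechecks, and retry loops.

static struct page *
slowpath_sketch(gfp_t gfp_mask, unsigned int order,
		struct zonelist *zonelist, enum zone_type high_zoneidx,
		nodemask_t *nodemask, int alloc_flags,
		struct zone *preferred_zone, int migratetype)
{
	unsigned long did_some_progress;
	struct page *page;

	/* Direct reclaim: this thread scans the LRU and frees pages itself. */
	page = __alloc_pages_direct_reclaim(gfp_mask, order, zonelist,
					high_zoneidx, nodemask, alloc_flags,
					preferred_zone, migratetype,
					&did_some_progress);
	if (page)
		return page;

	/*
	 * Only when reclaim made no progress do we head toward OOM, and
	 * __alloc_pages_may_oom() first retries get_page_from_freelist()
	 * with ALLOC_WMARK_HIGH (quoted above), so memory freed by a
	 * parallel OOM kill is noticed before out_of_memory() runs again.
	 */
	if (!did_some_progress &&
	    (gfp_mask & __GFP_FS) && !(gfp_mask & __GFP_NORETRY))
		page = __alloc_pages_may_oom(gfp_mask, order, zonelist,
					high_zoneidx, nodemask,
					preferred_zone, migratetype);

	return page;
}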

But in that case we still suffer unnecessary page scanning at each
priority (12 down to 0). So in this case your patch looks good to me;
you should update the changelog, though. :)
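For illustration, a hypothetical sketch of the bail-out that patch 2/2 is
driving at; reclaim_no_longer_needed() is an invented name, and
zone->pages_high is the field of this era (later kernels use
high_wmark_pages(zone)):

/*
 * Stop reclaim early when every usable zone is already above its high
 * watermark, e.g. because a parallel reclaimer's OOM kill just freed a
 * large process's memory.
 */
static int reclaim_no_longer_needed(struct zonelist *zonelist,
					enum zone_type high_zoneidx)
{
	struct zoneref *z;
	struct zone *zone;

	for_each_zone_zonelist(zone, z, zonelist, high_zoneidx) {
		if (!zone_watermark_ok(zone, 0, zone->pages_high,
					high_zoneidx, 0))
			return 0;
	}

	/* Plenty of free memory everywhere; no point scanning further. */
	return 1;
}

do_try_to_free_pages() could call this at the end of each priority
iteration and return early, so the remaining priorities are not scanned
at all.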

--
Kind regards,
Minchan Kim

