Subject: Re: [PATCH 1/5] mm: Add __GFP_NO_OOM_KILL flag
On Friday 08 May 2009, Rafael J. Wysocki wrote:
> On Friday 08 May 2009, Wu Fengguang wrote:
[--snip--]
> > But hey, that 'count' counts "savable+free" memory.
> > We don't have a counter for an estimate of "free+freeable" memory,
> > i.e. the threshold above which we know preallocation cannot succeed.
> >
> > One applicable situation is when there is 800M of anonymous memory,
> > but only a 500M image_size and no swap space.
> >
> > In that case we would otherwise go down the OOM code path. Sure, OOM is
> > (and shall be) reliably disabled in hibernation, but we should still be
> > cautious enough not to create a low-memory situation, which will hurt:
> > - hibernation speed
> > (vmscan goes mad trying to squeeze out the last free page)
> > - user experience after resume
> > (all *active* file data and metadata have to be reloaded)
>
> Strangely enough, my recent testing with this patch doesn't confirm the
> theory. :-) Namely, I set image_size too low on purpose and it only caused
> preallocate_image_memory() to return NULL at one point and that was it.
>
> It didn't even take too much time.
>
> I'll carry out more testing to verify this observation.

I can confirm that even if image_size is below the minimum we can get,
the second preallocate_image_memory() just returns after allocating fewer pages
than it has been asked for (that's with the original __GFP_NO_OOM_KILL-based
approach, as I wrote in the previous message in this thread) and nothing bad
happens.
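
For reference, the loop in question looks roughly like this (a simplified
sketch, not the actual patch code; alloc_image_page() here stands in for
whatever allocator the image preallocation ends up using, and
__GFP_NO_OOM_KILL is the flag proposed in this series):

	static unsigned long preallocate_image_pages(unsigned long nr_pages,
						     gfp_t mask)
	{
		unsigned long nr_alloc = 0;

		while (nr_pages > 0) {
			struct page *page;

			/*
			 * With __GFP_NO_OOM_KILL set, a failing allocation
			 * returns NULL instead of invoking the OOM killer,
			 * so we simply stop early.
			 */
			page = alloc_image_page(mask | __GFP_NO_OOM_KILL);
			if (!page)
				break;
			nr_pages--;
			nr_alloc++;
		}
		return nr_alloc;
	}

So when image_size cannot be met, the caller just gets back fewer pages than
it asked for and carries on, which matches what I'm observing.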

That may be because we freeze the mm kernel threads, but I've also tested
without freezing them and it still worked the same way.

> > The current code simply tries *too hard* to meet image_size.
> > I'd rather take that as mild advice, and only free
> > "free+freeable-margin" pages when image_size is not attainable.
> >
> > The safety margin can be totalreserve_pages, plus enough pages for
> > retaining the "hard core working set".
>
> How to compute the size of the "hard core working set", then?

Well, I'm still interested in the answer here. ;-)
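
To make the question concrete, this is how I read the "free+freeable-margin"
idea (purely illustrative; the helper is made up and the choice of counters
is a guess on my part):

	#include <linux/mm.h>
	#include <linux/swap.h>
	#include <linux/vmstat.h>

	/*
	 * Illustrative only: estimate how many pages we could preallocate
	 * without eating into the reserves.  Whether NR_INACTIVE_FILE is
	 * the right proxy for "freeable", and how to account for the
	 * "hard core working set" on top of totalreserve_pages, is exactly
	 * the open question.
	 */
	static unsigned long preallocatable_pages(void)
	{
		unsigned long free = global_page_state(NR_FREE_PAGES);
		unsigned long freeable = global_page_state(NR_INACTIVE_FILE);
		unsigned long margin = totalreserve_pages;

		if (free + freeable <= margin)
			return 0;

		return free + freeable - margin;
	}

The active LRU lists tell us what has been referenced recently, but not how
much of that the user will actually miss after resume, so I don't see an
obvious counter for the working set part.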

Best,
Rafael

