Subject: Re: [PATCH -mm] throttle direct reclaim when too many pages are isolated already
On Wed, 15 Jul 2009 23:28:14 -0400 Rik van Riel <riel@redhat.com> wrote:

> Andrew Morton wrote:
> > On Wed, 15 Jul 2009 23:10:43 -0400 Rik van Riel <riel@redhat.com> wrote:
> >
> >> Andrew Morton wrote:
> >>> On Wed, 15 Jul 2009 22:38:53 -0400 Rik van Riel <riel@redhat.com> wrote:
> >>>
> >>>> When way too many processes go into direct reclaim, it is possible
> >>>> for all of the pages to be taken off the LRU. One result of this
> >>>> is that the next process in the page reclaim code thinks there are
> >>>> no reclaimable pages left and triggers an out of memory kill.
> >>>>
> >>>> One solution to this problem is to never let so many processes into
> >>>> the page reclaim path that the entire LRU is emptied. Limiting the
> >>>> system to only having half of each inactive list isolated for
> >>>> reclaim should be safe.
> >>>>
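[The too_many_isolated() helper itself is not quoted in this message. A minimal sketch of the check described above, assuming the per-zone NR_ISOLATED_ANON/NR_ISOLATED_FILE counters added in Kosaki's series, could look like the following; names and exact conditions are illustrative, not the posted patch.]

static int too_many_isolated(struct zone *zone, int file)
{
        unsigned long inactive, isolated;

        /* kswapd is exempt so background reclaim keeps making progress */
        if (current_is_kswapd())
                return 0;

        if (file) {
                inactive = zone_page_state(zone, NR_INACTIVE_FILE);
                isolated = zone_page_state(zone, NR_ISOLATED_FILE);
        } else {
                inactive = zone_page_state(zone, NR_INACTIVE_ANON);
                isolated = zone_page_state(zone, NR_ISOLATED_ANON);
        }

        /* "half of each inactive list isolated" <=> isolated > inactive */
        return isolated > inactive;
}
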
> >>> Since when? Linux page reclaim has a billion machine-years of testing and
> >>> now stuff like this turns up. Did we break it or is this a
> >>> never-before-discovered workload?
> >> It's been there for years, in various forms. It hardly ever
> >> shows up, but Kosaki's patch series gives us a nice chance to
> >> fix it for good.
> >
> > OK.
> >
> >>>> @@ -1049,6 +1070,10 @@ static unsigned long shrink_inactive_list
> >>>> struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc);
> >>>> int lumpy_reclaim = 0;
> >>>>
> >>>> + while (unlikely(too_many_isolated(zone, file))) {
> >>>> + schedule_timeout_interruptible(HZ/10);
> >>>> + }
> >>> This (incorrectly-laid-out) code is a no-op if signal_pending().
> >> Good point, I should add some code to break out of page reclaim
> >> if a fatal signal is pending.
> >
> > We can't just return NULL from __alloc_pages(), and if we can't
> > get a page from the freelists then we're just going to have to keep
> > reclaiming. So I'm not sure how we can do this.
>
> If we are stuck at this point in the page reclaim code,
> it is because too many other tasks are reclaiming pages.
>
> That makes it fairly safe to just return SWAP_CLUSTER_MAX
> here and hope that __alloc_pages() can get a page.
>
> After all, if __alloc_pages() thinks it made progress,
> but still cannot make the allocation, it will call the
> pageout code again.

Which will immediately return because the caller still has
fatal_signal_pending()?
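
[For reference, the change Rik describes above, breaking out of the throttle when the task has a fatal signal pending and returning SWAP_CLUSTER_MAX so __alloc_pages() can retry the freelists, might look roughly like the sketch below. This illustrates the idea under discussion, not the posted patch, and it does not by itself settle the follow-up question about reclaim being re-entered while the signal is still pending.]

        while (unlikely(too_many_isolated(zone, file))) {
                /*
                 * The task is dying (e.g. OOM-killed): claim a token amount
                 * of progress so __alloc_pages() retries the freelists while
                 * this task exits and frees its memory.
                 */
                if (fatal_signal_pending(current))
                        return SWAP_CLUSTER_MAX;

                /*
                 * Sleep killably so that a non-fatal pending signal does not
                 * turn the wait into a busy loop.
                 */
                schedule_timeout_killable(HZ/10);
        }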


