Subject: Re: Deadlock possibly caused by too_many_isolated.
On Wed, 2010-09-15 at 11:13 +0800, Wu Fengguang wrote:
> On Wed, Sep 15, 2010 at 11:06:40AM +0800, Wu Fengguang wrote:
> > On Wed, Sep 15, 2010 at 10:54:54AM +0800, Wu Fengguang wrote:
> > > On Wed, Sep 15, 2010 at 10:37:35AM +0800, Wu Fengguang wrote:
> > > > On Wed, Sep 15, 2010 at 10:23:34AM +0800, Neil Brown wrote:
> > > > > On Tue, 14 Sep 2010 20:30:18 -0400
> > > > > Rik van Riel <riel@redhat.com> wrote:
> > > > >
> > > > > > On 09/14/2010 07:11 PM, Neil Brown wrote:
> > > > > >
> > > > > > > Index: linux-2.6.32-SLE11-SP1/mm/vmscan.c
> > > > > > > ===================================================================
> > > > > > > --- linux-2.6.32-SLE11-SP1.orig/mm/vmscan.c 2010-09-15 08:37:32.000000000 +1000
> > > > > > > +++ linux-2.6.32-SLE11-SP1/mm/vmscan.c 2010-09-15 08:38:57.000000000 +1000
> > > > > > > @@ -1106,6 +1106,11 @@ static unsigned long shrink_inactive_lis
> > > > > > >  		/* We are about to die and free our memory. Return now. */
> > > > > > >  		if (fatal_signal_pending(current))
> > > > > > >  			return SWAP_CLUSTER_MAX;
> > > > > > > +		if (!(sc->gfp_mask & __GFP_IO))
> > > > > > > +			/* Not allowed to do IO, so mustn't wait
> > > > > > > +			 * on processes that might try to
> > > > > > > +			 */
> > > > > > > +			return SWAP_CLUSTER_MAX;
> > > > > > >  	}
> > > > > > >
> > > > > > > /*
> > > > > >
> > > > > > Close. We must also be sure that processes without __GFP_FS
> > > > > > set in their gfp_mask do not wait on processes that do have
> > > > > > __GFP_FS set.
> > > > > >
> > > > > > Considering how many times we've run into a bug like this,
> > > > > > I'm kicking myself for not having thought of it :(
> > > > > >
> > > > >
> > > > > So maybe this? I've added the test for __GFP_FS, and moved the test before
> > > > > the congestion_wait on the basis that we really want to get back up the stack
> > > > > and try the mempool ASAP.
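[The revised patch itself is not quoted in this message. For illustration only, one plausible shape of the combined test, assuming the same shrink_inactive_list() context as the diff above; the exact condition is an assumption, not the patch Neil posted:]

	/*
	 * Hypothetical sketch: bail out if either __GFP_IO or __GFP_FS
	 * is missing from the reclaimer's gfp_mask, so a task that may
	 * not do IO/FS work never sleeps waiting on tasks that may.
	 * Placed before congestion_wait() so we return up the stack
	 * and try the mempool as soon as possible.
	 */
	if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) != (__GFP_IO | __GFP_FS))
		return SWAP_CLUSTER_MAX;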
> > > >
> > > > The patch may well fail the !__GFP_IO page allocation and then
> > > > quickly exhaust the mempool.
> > > >
> > > > Another approach may be to let too_many_isolated() use much higher
> > > > thresholds for !__GFP_IO/FS and lower ones for __GFP_IO/FS, ie. to
> > > > allow at least nr2 NOIO/FS tasks to be blocked independently of the
> > > > IO/FS ones. Since NOIO vmscans typically complete fast, it will then
> > > > be very hard to accumulate enough NOIO processes for them to actually
> > > > be blocked.
> > > >
> > > >
> > > >     IO/FS tasks          NOIO/FS tasks                      full
> > > >      block here           block here                      LRU size
> > > > |-----------------|--------------------------|-----------------------|
> > > >         nr1                    nr2
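[A sketch of that dual-threshold idea, modelled on the 2.6.32-era too_many_isolated(); the gfp test and the shift amount are illustrative assumptions, not the patch that follows:]

	static int too_many_isolated(struct zone *zone, int file,
				     struct scan_control *sc)
	{
		unsigned long inactive, isolated;

		if (current_is_kswapd())
			return 0;

		if (file) {
			inactive = zone_page_state(zone, NR_INACTIVE_FILE);
			isolated = zone_page_state(zone, NR_ISOLATED_FILE);
		} else {
			inactive = zone_page_state(zone, NR_INACTIVE_ANON);
			isolated = zone_page_state(zone, NR_ISOLATED_ANON);
		}

		/*
		 * Illustrative: reclaimers that may do IO and FS work block
		 * at the low threshold nr1; GFP_NOIO/GFP_NOFS reclaimers only
		 * block at the much higher nr1 + nr2, which in practice they
		 * never reach because their scans complete quickly.
		 */
		if ((sc->gfp_mask & (__GFP_IO | __GFP_FS)) == (__GFP_IO | __GFP_FS))
			inactive >>= 3;	/* low threshold for IO/FS tasks */

		return isolated > inactive;
	}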
> > >
> > > How about this fix? We may need a very high threshold for NOIO/NOFS
> > > to prevent possible regressions.
> >
> > Plus __GFP_WAIT..
>
> Ah sorry! A !__GFP_WAIT allocation cannot afford to wait, by definition..
>
> ---
> diff --git a/mm/vmscan.c b/mm/vmscan.c
> index 225a759..becc63a 100644
> --- a/mm/vmscan.c
> +++ b/mm/vmscan.c
> @@ -1135,10 +1135,14 @@ static int too_many_isolated(struct zone *zone, int file,
>  		struct scan_control *sc)
>  {
>  	unsigned long inactive, isolated;
> +	int ratio;
>
>  	if (current_is_kswapd())
>  		return 0;
>
> +	if (!(sc->gfp_mask & __GFP_WAIT))
> +		return 0;
> +
it appears a !__GFP_WAIT allocation doesn't go to direct reclaim at all,
so the check shouldn't be needed.
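[For reference: in kernels of this vintage the slow path bails out for atomic allocations long before direct reclaim. Abridged from 2.6.32-era mm/page_alloc.c; a condensed sketch with elisions, not a verbatim quote:]

	static inline struct page *
	__alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order, ...)
	{
		const gfp_t wait = gfp_mask & __GFP_WAIT;
		...
		/* Atomic allocations - we can't balance anything */
		if (!wait)
			goto nopage;
		...
		/* Try direct reclaim and then allocating */
		page = __alloc_pages_direct_reclaim(gfp_mask, order, ...);
		...
	}

So a !__GFP_WAIT caller never reaches too_many_isolated() in the first place.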


