Date: Thu, 31 Oct 2013
From: Luis Henriques <luis.henriques@canonical.com>
Subject: Re: [PATCH 3.5 29/64] fs: buffer: move allocation failure loop into the allocator
On Thu, Oct 31, 2013 at 10:00:08AM -0400, Johannes Weiner wrote:
> This is part of a bigger series and was tagged for stable as a
> reminder only. Please don't apply for now.

Grrr... I need to start cleaning my email inbox before doing a
release. I just saw the discussion in stable@.

I'll do an emergency release reverting this patch. Thanks for
catching this.

Cheers,
--
Luis


>
> On Mon, Oct 28, 2013 at 02:47:48PM +0000, Luis Henriques wrote:
> > 3.5.7.24 -stable review patch. If anyone has any objections, please let me know.
> >
> > ------------------
> >
> > From: Johannes Weiner <hannes@cmpxchg.org>
> >
> > commit 84235de394d9775bfaa7fa9762a59d91fef0c1fc upstream.
> >
> > Buffer allocation has a very crude indefinite loop around waking the
> > flusher threads and performing global NOFS direct reclaim because it
> > cannot handle allocation failures.
> >
> > The most immediate problem with this is that the allocation may fail
> > due to a memory cgroup limit, where flushers + direct reclaim might not
> > make any progress towards resolving the situation at all: unlike the
> > global case, a memory cgroup may have no cache at all, only anonymous
> > pages and no swap. This situation leads to a reclaim livelock with
> > insane IO from waking the flushers and thrashing unrelated filesystem
> > cache in a tight loop.
> >
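(For readers without a 3.5 tree handy, the "crude indefinite loop" above is
the one in __getblk_slow()/free_more_memory(). The following is a paraphrased
sketch of that era's fs/buffer.c, trimmed for illustration and not the
literal code:

	static struct buffer_head *
	__getblk_slow(struct block_device *bdev, sector_t block, int size)
	{
		for (;;) {
			struct buffer_head *bh;
			int ret;

			bh = __find_get_block(bdev, block, size);
			if (bh)
				return bh;

			ret = grow_buffers(bdev, block, size);
			if (ret < 0)
				return NULL;
			if (ret == 0)
				/* wake flushers, do global GFP_NOFS reclaim, retry */
				free_more_memory();
		}
	}

When the failure comes from a memcg limit rather than genuine global memory
pressure, neither the flushers nor global reclaim can lower the group's
usage, so this loop spins forever.)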
> > Use __GFP_NOFAIL allocations for buffers for now. This makes sure that
> > any looping happens in the page allocator, which knows how to
> > orchestrate kswapd, direct reclaim, and the flushers sensibly. It also
> > allows memory cgroups to detect allocations that can't handle failure
> > and will allow them to ultimately bypass the limit if reclaim cannot
> > make progress.
> >
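(Likewise, "looping in the page allocator" refers to the slow-path retry
decision in mm/page_alloc.c, which around this time looked roughly like the
simplified sketch below; again a paraphrase, not the literal code:

	static inline int
	should_alloc_retry(gfp_t gfp_mask, unsigned int order,
			   unsigned long did_some_progress,
			   unsigned long pages_reclaimed)
	{
		/* Do not loop if specifically requested */
		if (gfp_mask & __GFP_NORETRY)
			return 0;

		/* Always retry if specifically requested */
		if (gfp_mask & __GFP_NOFAIL)
			return 1;

		/* ... heuristics for ordinary allocations elided ... */
		return 0;
	}

So with __GFP_NOFAIL set, the retrying happens inside
__alloc_pages_slowpath(), interleaved with its usual kswapd wakeups and
direct reclaim, and the memcg charge path can recognise such requests
instead of failing them, which is what the mm/memcontrol.c hunk below does.)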
> > Reported-by: azurIt <azurit@pobox.sk>
> > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@suse.cz>
> > Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> > Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
> > Signed-off-by: Luis Henriques <luis.henriques@canonical.com>
> > ---
> >  fs/buffer.c     | 14 ++++++++++++--
> >  mm/memcontrol.c |  2 ++
> >  2 files changed, 14 insertions(+), 2 deletions(-)
> >
> > diff --git a/fs/buffer.c b/fs/buffer.c
> > index 2c78739..2675e5a 100644
> > --- a/fs/buffer.c
> > +++ b/fs/buffer.c
> > @@ -957,9 +957,19 @@ grow_dev_page(struct block_device *bdev, sector_t block,
> >  	struct buffer_head *bh;
> >  	sector_t end_block;
> >  	int ret = 0;	/* Will call free_more_memory() */
> > +	gfp_t gfp_mask;
> >
> > -	page = find_or_create_page(inode->i_mapping, index,
> > -		(mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS)|__GFP_MOVABLE);
> > +	gfp_mask = mapping_gfp_mask(inode->i_mapping) & ~__GFP_FS;
> > +	gfp_mask |= __GFP_MOVABLE;
> > +	/*
> > +	 * XXX: __getblk_slow() can not really deal with failure and
> > +	 * will endlessly loop on improvised global reclaim. Prefer
> > +	 * looping in the allocator rather than here, at least that
> > +	 * code knows what it's doing.
> > +	 */
> > +	gfp_mask |= __GFP_NOFAIL;
> > +
> > +	page = find_or_create_page(inode->i_mapping, index, gfp_mask);
> >  	if (!page)
> >  		return ret;
> >
> > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > index 226b63e..953bf3c 100644
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -2405,6 +2405,8 @@ done:
> >  	return 0;
> >  nomem:
> >  	*ptr = NULL;
> > +	if (gfp_mask & __GFP_NOFAIL)
> > +		return 0;
> >  	return -ENOMEM;
> >  bypass:
> >  	*ptr = root_mem_cgroup;
> > --
> > 1.8.3.2
> >

