Subject: Re: Deadlock possibly caused by too_many_isolated.
On Tue, 19 Oct 2010 09:31:42 +1100
Neil Brown <neilb@suse.de> wrote:

> On Mon, 18 Oct 2010 14:58:59 -0700
> Andrew Morton <akpm@linux-foundation.org> wrote:
>
> > On Tue, 19 Oct 2010 00:15:04 +0800
> > Wu Fengguang <fengguang.wu@intel.com> wrote:
> >
> > > Neil found that if too_many_isolated() returns true while performing
> > > direct reclaim we can end up waiting for other threads to complete their
> > > direct reclaim. If those threads are allowed to enter the FS or IO to
> > > free memory, but this thread is not, then it is possible that those
> > > threads will be waiting on this thread and so we get a circular
> > > deadlock.
> > >
> > > some task enters direct reclaim with GFP_KERNEL
> > > => too_many_isolated() false
> > > => vmscan and run into dirty pages
> > > => pageout()
> > > => take some FS lock
> > > => fs/block code does GFP_NOIO allocation
> > > => enter direct reclaim again
> > > => too_many_isolated() true
> > > => waiting for others to progress, however the other
> > > tasks may be circular waiting for the FS lock..

I'm assuming that the last four "=>"'s here should have been indented
another stop.
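
(For context, the throttle in question sits at the top of
shrink_inactive_list(). Paraphrasing mm/vmscan.c of this era from
memory -- this is a condensed sketch, not a verbatim copy:

	while (unlikely(too_many_isolated(zone, file, sc))) {
		congestion_wait(BLK_RW_ASYNC, HZ/10);

		/* We are about to die and free our memory. Return now. */
		if (fatal_signal_pending(current))
			return SWAP_CLUSTER_MAX;
	}

	static int too_many_isolated(struct zone *zone, int file,
			struct scan_control *sc)
	{
		unsigned long inactive, isolated;

		/* kswapd is never throttled here; only direct reclaim is */
		if (current_is_kswapd())
			return 0;

		if (file) {
			inactive = zone_page_state(zone, NR_INACTIVE_FILE);
			isolated = zone_page_state(zone, NR_ISOLATED_FILE);
		} else {
			inactive = zone_page_state(zone, NR_INACTIVE_ANON);
			isolated = zone_page_state(zone, NR_ISOLATED_ANON);
		}

		return isolated > inactive;
	}

The circular wait is that a !__GFP_IO/!__GFP_FS reclaimer spinning in
congestion_wait() cannot itself lower the NR_ISOLATED_* counts, while
the tasks that could are blocked behind the FS lock it holds.)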

> > > The fix is to let !__GFP_IO and !__GFP_FS direct reclaims enjoy higher
> > > priority than normal ones, by granting them a higher throttle threshold.
> > >
> > > Now !GFP_IOFS reclaims won't be waiting for GFP_IOFS reclaims to
> > > progress. They will be blocked only when there are too many concurrent
> > > !GFP_IOFS reclaims; however, that's very unlikely because IO-less
> > > direct reclaims are able to progress much faster and won't deadlock
> > > each other. The threshold is raised high enough for them that there
> > > can be sufficient parallel progress of !GFP_IOFS reclaims.
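
(If I'm reading the patch right, the effect on too_many_isolated() is
roughly the following -- a sketch of the idea, not the patch text
itself. GFP_IOFS callers get throttled once isolated pages exceed
inactive/8, while !__GFP_IO/!__GFP_FS callers only block once isolated
exceeds the full inactive count, an 8x higher bar:

	/*
	 * GFP_NOIO/GFP_NOFS callers keep a much higher throttle
	 * threshold, so they are not blocked behind normal direct
	 * reclaimers that may be holding FS locks.
	 */
	if ((sc->gfp_mask & GFP_IOFS) == GFP_IOFS)
		inactive >>= 3;

	return isolated > inactive;
)
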
> >
> > I'm not sure that this is really a full fix. Torsten's analysis does
> > appear to point at the real bug: raid1 has code paths which allocate
> > more than a single element from a mempool without starting IO against
> > previous elements.
>
> ... point at "a" real bug.
>
> I think there are two bugs here.
> The raid1 bug that Torsten mentions is certainly real (and has been around
> for an embarrassingly long time).
> The bug that I identified in too_many_isolated is also a real bug and can be
> triggered without md/raid1 in the mix.
> So this is not a 'full fix' for every bug in the kernel :-), but it could
> well be a full fix for this particular bug.
>
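
(To make the raid1 hazard concrete: the anti-pattern is something like
the sketch below -- hypothetical code, not the actual raid1 path. A
second mempool_alloc() before the first element's IO has been submitted
can sleep forever once the pool is empty, because the free that would
replenish the pool only happens at IO completion:

	struct bio *a, *b;

	a = mempool_alloc(pool, GFP_NOIO);	/* may take the last element */

	/* BUG: nothing has been submitted yet.  If the pool is now
	 * empty, this blocks waiting for a mempool_free() that can only
	 * come from the completion of IO we have not started. */
	b = mempool_alloc(pool, GFP_NOIO);

	submit_bio(WRITE, a);
	submit_bio(WRITE, b);

With several such tasks, each holding one element and waiting for
another, the pool's guaranteed minimum no longer guarantees forward
progress.)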

Can we just delete the too_many_isolated() logic? (The crappy comment
describes what the code does, but not why it does it.)


