Subject: Re: [PATCH] remove throttle_vm_writeout()
On Fri, 05 Oct 2007 02:12:30 +0200 Miklos Szeredi <miklos@szeredi.hu> wrote:

> >
> > I don't think I understand that. Sure, it _shouldn't_ be a problem. But it
> > _is_. That's what we're trying to fix, isn't it?
>
> The problem, I believe, is in the memory allocation code, not in fuse.

fuse is trying to do something which page reclaim was not designed for.
Stuff broke.

> In the example, memory allocation may be blocking indefinitely,
> because we have 4MB under writeback, even though 28MB can still be
> made available. And that _should_ be fixable.

Well yes. But we need to work out how, without re-breaking the thing which
throttle_vm_writeout() fixed.

> > > So the only thing the kernel should be careful about is not to block
> > > on an allocation unless strictly necessary.
> > >
> > > Actually a trivial fix for this problem could be to just tweak the
> > > thresholds, so as to make the above scenario impossible. Although I'm
> > > still not convinced this patch is perfect, because the dirty
> > > threshold can actually change over time...
> > >
> > > Index: linux/mm/page-writeback.c
> > > ===================================================================
> > > --- linux.orig/mm/page-writeback.c 2007-10-05 00:31:01.000000000 +0200
> > > +++ linux/mm/page-writeback.c 2007-10-05 00:50:11.000000000 +0200
> > > @@ -515,6 +515,12 @@ void throttle_vm_writeout(gfp_t gfp_mask
> > >  	for ( ; ; ) {
> > >  		get_dirty_limits(&background_thresh, &dirty_thresh, NULL, NULL);
> > >
> > > +		/*
> > > +		 * Make sure the threshold is over the hard limit of
> > > +		 * dirty_thresh + ratelimit_pages * nr_cpus
> > > +		 */
> > > +		dirty_thresh += ratelimit_pages * num_online_cpus();
> > > +
> > >  		/*
> > >  		 * Boost the allowable dirty threshold a bit for page
> > >  		 * allocators so they don't get DoS'ed by heavy writers
> >
> > I can probably kind of guess what you're trying to do here. But if
> > ratelimit_pages * num_online_cpus() exceeds the size of the offending zone
> > then things might go bad.
>
> I think the admin can do quite a bit of other damage by setting
> dirty_ratio too high.
>
> Maybe this writeback throttling should just have a fixed limit of 80%
> of ZONE_NORMAL, and limit dirty_ratio to something like 50%.

Bear in mind that the same problem will occur for the 16MB ZONE_DMA, and
we cannot limit the system-wide dirty-memory threshold to 12MB.

iow, throttle_vm_writeout() needs to become zone-aware. Then it only
throttles when, say, 80% of ZONE_FOO is under writeback.
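
Very roughly, something like the sketch below (untested, and the 80% cutoff
plus the walk over all zones are only placeholders; a real version would
presumably look only at the zones this allocation is allowed to use):

void throttle_vm_writeout(gfp_t gfp_mask)
{
	struct zone *zone;

	if ((gfp_mask & (__GFP_FS|__GFP_IO)) != (__GFP_FS|__GFP_IO)) {
		/* Caller may hold locks which prevent IO completion */
		congestion_wait(WRITE, HZ/10);
		return;
	}

	for ( ; ; ) {
		int over_limit = 0;

		for_each_zone(zone) {
			unsigned long limit;

			if (!populated_zone(zone))
				continue;
			/* throttle only when ~80% of this zone is under writeback */
			limit = zone->present_pages * 8 / 10;
			if (zone_page_state(zone, NR_WRITEBACK) > limit)
				over_limit = 1;
		}
		if (!over_limit)
			break;
		congestion_wait(WRITE, HZ/10);
	}
}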

Except I don't think that'll fix the problem 100%: if your fuse kernel
component somehow manages to put 80% of ZONE_FOO under writeback (and
remember this might be only 12MB on a 16GB machine) then we get stuck again
- the fuse server process (is that the correct terminology, btw?) ends up
waiting upon itself.

I'll think about it a bit.
