Date: Thu, 4 Nov 2010
From: Dave Chinner <david@fromorbit.com>
Subject: Re: [PATCH 00/17] [RFC] soft and dynamic dirty throttling limits
On Thu, Nov 04, 2010 at 11:41:19AM +0800, Wu Fengguang wrote:
> Hi Dave,
>
> On Mon, Nov 01, 2010 at 02:24:46PM +0800, Dave Chinner wrote:
> > On Wed, Oct 13, 2010 at 08:26:27PM +1100, Dave Chinner wrote:
> > > On Wed, Oct 13, 2010 at 04:26:12PM +0800, Wu Fengguang wrote:
> > > > On Wed, Oct 13, 2010 at 11:07:33AM +0800, Dave Chinner wrote:
> > > > > On Tue, Oct 12, 2010 at 10:17:16AM -0400, Christoph Hellwig wrote:
> > > > > > Wu, what's the state of this series? It looks like we'll need it
> > > > > > rather sooner than later - try to get at least the preparations in
> > > > > > ASAP would be really helpful.
> > > > >
> > > > > Not ready in its current form. This load (creating millions of
> > > > > 1-byte files in parallel):
> > > > >
> > > > > $ /usr/bin/time ./fs_mark -D 10000 -S0 -n 100000 -s 1 -L 63 \
> > > > > > -d /mnt/scratch/0 -d /mnt/scratch/1 \
> > > > > > -d /mnt/scratch/2 -d /mnt/scratch/3 \
> > > > > > -d /mnt/scratch/4 -d /mnt/scratch/5 \
> > > > > > -d /mnt/scratch/6 -d /mnt/scratch/7
> > > > >
> > > > > Locks up all the fs_mark processes spinning in traces like the
> > > > > following and no further progress is made when the inode cache
> > > > > fills memory.
> > > >
> > > > I reproduced the problem on a 6G/8p 2-socket 11-disk box.
> > > >
> > > > The root cause is that pageout() is somehow called at low scan
> > > > priority, which deserves more investigation.
> > > >
> > > > The direct cause is that balance_dirty_pages() then keeps nr_dirty
> > > > too low. This can easily be improved by never pushing the soft
> > > > dirty limit below one second's worth of dirty pages.
> > > >
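
To illustrate the clamping rule above: a minimal userspace C sketch.
The function name and parameters here are made up for illustration,
not taken from the patchset:

    #include <stdio.h>

    /*
     * Keep the dynamic soft dirty limit at or above one second's worth
     * of writeback at the estimated bandwidth. Below that, throttling
     * squeezes nr_dirty so low that reclaim ends up calling pageout()
     * on individual dirty pages.
     */
    static unsigned long clamp_soft_dirty_limit(unsigned long soft_thresh_pages,
                                                unsigned long write_bw_pages_per_sec)
    {
        unsigned long floor_pages = write_bw_pages_per_sec; /* 1s of IO */

        return soft_thresh_pages > floor_pages ? soft_thresh_pages
                                               : floor_pages;
    }

    int main(void)
    {
        /* e.g. ~50 MB/s at 4 KiB pages is ~12800 pages/s */
        printf("%lu\n", clamp_soft_dirty_limit(4096, 12800)); /* 12800 */
        return 0;
    }
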
> > > > My test box has two nodes, and their memory usage is rather
> > > > unbalanced: (Dave, maybe you have a NUMA setup too?)
> > >
> > > No, I'm running the test in a single node VM.
> > >
> > > FYI, I'm running the test on XFS (16TB 12 disk RAID0 stripe), using
> > > the mount options "inode64,nobarrier,logbsize=262144,delaylog".
> >
> > Any update on the current status of this patchset?
>
> The last 3 patches, which dynamically lower the 20% dirty limit,
> seem to hurt writeback throughput when the limit goes too low. That's
> not surprising. I tried moderately increasing the low bound of the
> dynamic dirty limit, but tests show that it's still not enough. Days
> ago I came up with another low-bound scheme; however, the test box
> has been running LKP (and other) benchmarks for the new -rc1 release...
>
> Anyway, I see some tricky points in deciding the low bound for the
> dynamic dirty limit. It seems reasonable to bypass this feature for
> now and to test/submit the other important parts first.
>
> I'm feeling relatively good about the first 14 patches, which do
> IO-less balance_dirty_pages() and a larger writeback chunk size.
> I'll repost them separately as v2 after returning to Shanghai.

As I've pointed out already, increasing the writeback chunk size is
not a good idea, so I'd suggest separating it from the IO-less
balance_dirty_pages() series.
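
For readers new to the thread, the "IO-less" part means that the
dirtying task no longer issues writeback itself from
balance_dirty_pages(); it only sleeps while the per-bdi flusher thread
does all the IO. A toy userspace C sketch of that throttling loop
follows; the names, the linear pause formula, and the simulated
counter are illustrative, not the actual patch code:

    #include <stdio.h>
    #include <unistd.h>

    static unsigned long nr_dirty = 30000;  /* simulated dirty-page count */

    /* Stand-in for re-sampling the dirty counters; pretend the flusher
     * thread cleans 1000 pages between samples. */
    static unsigned long sample_nr_dirty(void)
    {
        if (nr_dirty >= 1000)
            nr_dirty -= 1000;
        return nr_dirty;
    }

    /* Throttle a dirtier by sleeping, without issuing any IO from this
     * context. The pause grows as nr_dirty approaches the hard limit. */
    static void balance_dirty_pages_sketch(unsigned long setpoint,
                                           unsigned long limit)
    {
        unsigned long dirty;

        while ((dirty = sample_nr_dirty()) > setpoint) {
            unsigned long pause_ms = 10 + 190 * (dirty - setpoint)
                                              / (limit - setpoint + 1);
            usleep(pause_ms * 1000);   /* sleep; the flusher does the IO */
        }
    }

    int main(void)
    {
        balance_dirty_pages_sketch(20000, 40000);
        printf("throttled down to %lu dirty pages\n", nr_dirty);
        return 0;
    }
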

> Some days ago I prepared some slides with figures on the old and new
> dirty throttling schemes. Hope they help.
>
> http://www.kernel.org/pub/linux/kernel/people/wfg/writeback/dirty-throttling.pdf

Pretty colours, but they don't really add much to what I already
understood from your series description. I guess the slides lose
something without someone talking through them.... :/

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

