Date: 10 May 2011
From: Dave Chinner <david@fromorbit.com>
Subject: Re: [PATCH 6/6] writeback: refill b_io iff empty
On Tue, May 10, 2011 at 12:31:04PM +0800, Wu Fengguang wrote:
> On Fri, May 06, 2011 at 10:21:55PM +0800, Jan Kara wrote:
> > On Fri 06-05-11 13:29:55, Wu Fengguang wrote:
> > > On Fri, May 06, 2011 at 12:37:08AM +0800, Jan Kara wrote:
> > > > On Wed 04-05-11 15:39:31, Wu Fengguang wrote:
> > > > > To help understand the behavior change, I wrote the writeback_queue_io
> > > > > trace event, and found very different patterns between
> > > > > - vanilla kernel
> > > > > - this patchset plus the sync livelock fixes
> > > > >
> > > > > Basically the vanilla kernel each time pulls a random number of inodes
> > > > > from b_dirty, while the patched kernel tends to pull a fixed number of
> > > > > inodes (enqueue=1031) from b_dirty. The new behavior is very interesting...
> > > > This regularity is really strange. Did you have a chance to look more into
> > > > it? I find it highly unlikely that there would be exactly 1031 dirty inodes
> > > > in b_dirty list every time you call move_expired_inodes()...
> > >
> > > Jan, I got some results for ext4. The total dd+tar+sync time is
> > > decreased from 177s to 167s. The other numbers are either raised or
> > > dropped.
> > Nice, but what I was more curious about was to understand why you saw
> > enqueued=1031 all the time.
>
> Maybe some unknown interactions with XFS? Attached is another trace
> with both writeback_single_inode and writeback_queue_io.

Perhaps because write throttling is limiting the number of files
being dirtied to match the number of files being cleaned? Hence they
age at roughly the same rate as writeback is cleaning them?
Especially as most files are only a single page in size?
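
To make that first idea concrete, here's a simplified sketch of what
the enqueue count in writeback_queue_io is measuring (this is not the
actual fs/fs-writeback.c code, and the field names are from memory):
each pass walks b_dirty from the oldest end and moves every inode
dirtied before the expire cutoff over to b_io. If throttling keeps
the dirtying rate in step with the cleaning rate, a near-constant
number of inodes crosses that cutoff between passes, which would give
a steady enqueue count:

static int queue_expired_sketch(struct bdi_writeback *wb,
				unsigned long older_than_this)
{
	int moved = 0;

	/* b_dirty is kept in dirtied_when order, oldest at the tail */
	while (!list_empty(&wb->b_dirty)) {
		struct inode *inode = list_entry(wb->b_dirty.prev,
						 struct inode, i_wb_list);

		if (time_after(inode->dirtied_when, older_than_this))
			break;
		list_move(&inode->i_wb_list, &wb->b_io);
		moved++;
	}
	return moved;		/* the "enqueue" number in the trace */
}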

Or perhaps that is the rate at which IO completions are occurring,
updating the inode size and redirtying the inode? After all, there
are lots of inodes with only state=I_DIRTY_SYNC and wrote=0 in the
traces around the point where it starts going to ~1000 inodes per
queue_io call...
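
And for the IO completion idea, the general shape would be something
like the sketch below. This is not XFS's actual completion path, just
an illustration of how a size-updating completion could leave an
inode on b_dirty with nothing but I_DIRTY_SYNC set, so a later
writeback pass reports wrote=0:

static void sketch_write_end_io(struct inode *inode, loff_t new_size)
{
	/* real code needs proper serialisation around the size update */
	if (new_size > i_size_read(inode))
		i_size_write(inode, new_size);

	/* metadata-only redirty: __mark_inode_dirty(inode, I_DIRTY_SYNC) */
	mark_inode_dirty_sync(inode);
}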

Or maybe a combination of both?

Cheers,

Dave.
--
Dave Chinner
david@fromorbit.com

