From: Andrew Morton
Date: Tue, 23 Jun 2009
Subject: Re: merging the per-bdi writeback patchset

    On Tue, 23 Jun 2009 10:55:05 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:

    > On Tue, Jun 23 2009, Andrew Morton wrote:
    > > On Tue, 23 Jun 2009 10:11:56 +0200 Jens Axboe <jens.axboe@oracle.com> wrote:
    > >
    > > > Things are looking good for this patchset and it's been in -next for
    > > > almost a week without any reports of problems. So I'd like to merge it
    > > > for 2.6.31 if at all possible. Any objections?
    > >
    > > erk. I was rather expecting I'd have time to have a look at it all.
    >
    > OK, we can wait if we have to, just trying to avoid having to keep this
    > fresh for one full cycle. I have posted this patchset 11 times over the
    > past months, though, so it's not like it's a new piece of work :-)

    Yeah, sorry.

    > > It's unclear to me _why_ the performance changes which were
    > > observed actually occurred. In fact it's a bit unclear (to me)
    > > why the patchset was written and what it sets out to achieve :(
    >
    > It started out trying to get rid of the pdflush uneven writeout. If you
    > look at various pdflush intensive workloads, even on a single disk you
    > often have 5 or more pdflush threads working the same device. It's just
    > not optimal.

    That's a bug, isn't it? This

    /* Is another pdflush already flushing this queue? */
    if (current_is_pdflush() && !writeback_acquire(bdi))
            break;

    isn't working.
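
    (For reference: writeback_acquire() in this era is just a test-and-set
    of a per-bdi flag, so only one pdflush thread should be able to claim a
    given queue at a time. From memory of the 2.6.30-ish mm/backing-dev.c,
    approximately:)

        /*
         * Claim exclusive pdflush access to this backing device;
         * returns nonzero on success.
         */
        int writeback_acquire(struct backing_dev_info *bdi)
        {
                return !test_and_set_bit(BDI_pdflush, &bdi->state);
        }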

    > Another issue was starvation with request allocation. Given
    > that pdflush does non-blocking writes (it has to, by design), pdflush
    > can potentially be starved if someone else is working the device.

    hm, true. 100% starved, or just "slowed down"? The latter, I trust -
    otherwise there are still failure modes?
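
    (Context: the "non-blocking by design" behaviour is the congestion
    back-off in the writeback path. Simplified from memory of the
    2.6.30-ish fs/fs-writeback.c - details may differ:)

        if (wbc->nonblocking && bdi_write_congested(bdi)) {
                /*
                 * Don't block on a congested request queue: note the
                 * congestion and move on. Writers who _do_ block can
                 * therefore keep beating pdflush to the free requests.
                 */
                wbc->encountered_congestion = 1;
                break;
        }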

    > > A long time ago the XFS guys (Dave Chinner iirc) said that XFS needs
    > > more than one thread per device to keep the device saturated. Did that
    > > get addressed?
    >
    > It supports up to 32 threads per device, but Chinner et al. have been
    > silent. So the support is there, and there's a
    > super_operations->inode_get_wb() to map a dirty inode to a writeback
    > thread. Nobody is doing that yet, though.
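
    (A filesystem opting in would presumably provide something along these
    lines; the wb[]/wb_cnt layout is hypothetical, inferred only from the
    description above, not taken from the actual patchset:)

        /* hypothetical: spread dirty inodes over the bdi's wb threads */
        static struct bdi_writeback *myfs_inode_get_wb(struct inode *inode)
        {
                struct backing_dev_info *bdi =
                        inode->i_mapping->backing_dev_info;

                /* hash by inode number so an inode sticks to one thread */
                return &bdi->wb[inode->i_ino % bdi->wb_cnt];
        }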

    OK.

    How many kernel threads do the 1000-spindle people end up with?
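
    (Back-of-envelope: at one flusher thread per bdi, a 1000-spindle box
    with one bdi per spindle is on the order of 1000 threads; a filesystem
    using the full 32-threads-per-device support could in principle
    multiply that by 32.)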

