    Subject: Re: [patch 3/8] per backing_dev dirty and writeback page accounting
    On Mon, Mar 12, 2007 at 12:40:47PM +0100, Miklos Szeredi wrote:
    > > > I have no idea how serious the scalability problems with this are. If
    > > > they are serious, different solutions can probably be found for the
    > > > above, but this is certainly the simplest.
    > >
    > > Atomic operations on a single per-backing-device counter from all CPUs at once?
    > > That's a pretty serious scalability issue and it will cause a major
    > > performance regression for XFS.
    > OK. How about just accounting writeback pages? That should be much
    > less of a problem, since normally writeback is started from
    > pdflush/kupdate in large batches without any concurrency.

    Except when you are throttling: then the cacheline bounces around
    between CPUs as each of them triggers foreground writeback.....

    > Or is it possible to export the state of the device queue to mm?
    > E.g. could balance_dirty_pages() query the backing dev if there are
    > any outstanding write requests?

    Not directly - writeback_in_progress(bdi) is a coarse measure
    indicating that pdflush is active on this bdi (which implies
    outstanding write requests).

    > > I'd call this a showstopper right now - maybe you need to look at
    > > something like the ZVC code that Christoph Lameter wrote, perhaps?
    > That's rather a heavyweight approach for this I think.

    But if you want to use per-page accounting, you are going to
    need a per-cpu or per-zone set of counters on each bdi to do
    this without introducing regressions.
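
    Something along the lines of the existing percpu_counter code would
    probably do; a rough, untested sketch (the nr_dirty field and the
    helper names here are invented for illustration, not from any posted
    patch):

    #include <linux/percpu_counter.h>
    #include <linux/backing-dev.h>

    /*
     * Sketch only: "nr_dirty" would be a struct percpu_counter added to
     * struct backing_dev_info and set up with percpu_counter_init() when
     * the bdi is set up.
     */

    /* called from the set_page_dirty() paths for pages backed by this bdi */
    static inline void bdi_inc_dirty(struct backing_dev_info *bdi)
    {
            /* per-cpu increment - no shared cacheline bouncing on the fast path */
            percpu_counter_inc(&bdi->nr_dirty);
    }

    static inline s64 bdi_nr_dirty(struct backing_dev_info *bdi)
    {
            /* cheap, slightly fuzzy read - good enough for throttling decisions */
            return percpu_counter_read_positive(&bdi->nr_dirty);
    }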

    > The only info balance_dirty_pages() really needs is whether there are
    > any dirty+writeback pages bound for the backing dev or not.

    writeback bound (i.e. writing as fast as we can) is probably
    indicated fairly reliably by bdi_congested(bdi).

    Now all you need is the number of dirty pages....
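
    In the current tree that check would look something like the wrapper
    below - bdi_write_congested() is the existing flag test, the wrapper
    name is just for illustration:

    #include <linux/backing-dev.h>

    /*
     * "Writeback bound": the device's write queue is already congested,
     * i.e. it is being fed writes as fast as it can take them.
     */
    static inline int bdi_writeback_bound(struct backing_dev_info *bdi)
    {
            return bdi_write_congested(bdi);
    }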

    > It knows about the dirty pages, since it calls writeback_inodes() which
    > scans the dirty pages for this backing dev looking for ones to write
    > out.

    It scans the dirty inode list for dirty inodes, which indirectly finds
    the dirty pages. It does not know the number of dirty pages.

    > If after returning from writeback_inodes() wbc->nr_to_write
    > didn't decrease and wbc->pages_skipped is zero then we know that there
    > are no more dirty pages for the device. Or at least there are no
    > dirty pages which aren't already under writeback.

    Sure, you can tell if there are _no_ dirty pages on the bdi, but
    if there are dirty pages, you can't tell how many there are. Your
    followup patches need to know how many dirty+writeback pages there
    are on the bdi, so I don't really see any way you can solve the
    deadlock in this manner without scalable bdi->nr_dirty accounting.
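
    To put that concretely, the test you're describing boils down to
    something like this inside the balance_dirty_pages() loop (sketch
    only, using the existing 2.6.20 wbc fields and the bdi/write_chunk
    locals that are already there):

    struct writeback_control wbc = {
            .bdi             = bdi,
            .sync_mode       = WB_SYNC_NONE,
            .older_than_this = NULL,
            .nr_to_write     = write_chunk,
            .range_cyclic    = 1,
    };

    writeback_inodes(&wbc);

    if (wbc.nr_to_write == write_chunk && wbc.pages_skipped == 0) {
            /*
             * Nothing was written and nothing was skipped, so there are
             * no dirty pages left on this bdi that aren't already under
             * writeback.  But when this test fails it tells us nothing
             * about *how many* dirty pages remain.
             */
    }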


    IIUC, your problem is that there's another bdi that holds all the
    dirty pages, and this throttle loop never flushes pages from that
    other bdi and we sleep instead. It seems to me that the fundamental
    problem is that to clean the pages we need to flush both bdi's, not
    just the bdi we are directly dirtying.

    How about a "dependent bdi" link? i.e. if you have a loopback
    filesystem, it has a direct bdi (the loopback device) and a
    dependent bdi - the bdi that belongs to the underlying filesystem.

    When we enter the throttle loop we flush from the direct bdi
    and if we fail to flush all the pages we require, we flush
    the dependent bdi (maybe even just kick pdflush for that bdi)
    before we call congestion_wait() and go to sleep. This way
    we are always making progress cleaning pages on the machine,
    not just transferring dirty pages from one bdi to another.

    Wouldn't that solve the deadlock without needing painful per-bdi
    dirty page accounting?
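
    Roughly, I'm thinking of something like this (completely untested;
    the bdi_dependent field and the helper are invented for the sake of
    the example):

    #include <linux/writeback.h>
    #include <linux/backing-dev.h>

    /*
     * New link that a loopback-style bdi would set up at mount time:
     *
     *      struct backing_dev_info *bdi_dependent;  (underlying fs bdi, or NULL)
     */

    /* called from the balance_dirty_pages() throttle loop */
    static void bdi_flush_with_dependent(struct backing_dev_info *bdi,
                                         struct writeback_control *wbc)
    {
            /* flush the bdi we are directly dirtying, as we do now */
            writeback_inodes(wbc);

            if (wbc->nr_to_write > 0 && bdi->bdi_dependent) {
                    struct writeback_control dep_wbc = {
                            .bdi          = bdi->bdi_dependent,
                            .sync_mode    = WB_SYNC_NONE,
                            .nr_to_write  = wbc->nr_to_write,
                            .range_cyclic = 1,
                    };

                    /*
                     * We failed to flush everything we wanted from the
                     * direct bdi, so push the underlying bdi as well
                     * (or just kick pdflush for it) before we go to
                     * sleep in congestion_wait().  That way we keep
                     * making progress cleaning pages instead of merely
                     * moving dirty pages from one bdi to the other.
                     */
                    writeback_inodes(&dep_wbc);
            }
    }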


    Dave Chinner
    Principal Engineer
    SGI Australian Software Group