    Subject: Re: Networked filesystems vs backing_dev_info
    From: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Date: Sat, 27 Oct 2007
    On Sat, 2007-10-27 at 23:30 +0200, Peter Zijlstra wrote:
    > On Sat, 2007-10-27 at 16:02 -0500, Steve French wrote:
    > > On 10/27/07, Peter Zijlstra <a.p.zijlstra@chello.nl> wrote:
    > > > Hi,
    > > >
    > > > I had me a little look at bdi usage in networked filesystems.
    > > >
    > > > NFS, CIFS, (smbfs), AFS, CODA and NCP
    > > >
    > > > And of those, NFS is the only one that I could find that creates
    > > > backing_dev_info structures. The rest seem to fall back to
    > > > default_backing_dev_info.
    > > >
    > > > With my recent per-bdi dirty limit patches the bdi has become more
    > > > important than it has been in the past. While falling back to the
    > > > default_backing_dev_info isn't wrong per se, it isn't right either.
    > > >
    > > > Could I implore the various maintainers to look into this issue for
    > > > their respective filesystems. I'll try and come up with some patches
    > > > to address this, but feel free to beat me to it.
    > >
    > > I would like to understand more about your patches to see what bdi
    > > values make sense for CIFS and how to report possible congestion back
    > > to the page manager.
    >
    > So, what my recent patches do is carve up the total writeback cache
    > size, or dirty page limit as we call it, in proportion to each BDI's
    > writeout speed. So a fast device gets more than a slow device, but
    > will not starve it.
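
    Concretely, ignoring the details of how the writeout speed is
    averaged, each BDI's share works out to roughly:

        bdi_dirty_limit = total_dirty_limit * bdi_writeout_fraction

    where bdi_writeout_fraction is that BDI's fraction of recently
    completed writeback pages.
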
    >
    > However, for this to work, each device, or remote backing store in
    > the case of networked filesystems, needs to have a BDI.
    >
    > > I had been thinking about setting bdi->ra_pages
    > > so that we do more sensible readahead and writebehind - better
    > > matching what is possible over the network and what the server
    > > prefers.
    >
    > Well, you'd first have to create backing_dev_info instances before
    > setting that value :-)
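
    Once you do have one it's a one-liner; 32 below is a number I made
    up, ideally you'd derive it from the negotiated buffer size:

        bdi->ra_pages = 32;	/* 128K of readahead with 4K pages */
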
    >
    > > SMB/CIFS servers typically allow a maximum of 50 requests in
    > > parallel at one time from one client (although this is adjustable
    > > for some).
    >
    > That seems like a perfect point to set congestion.
    >
    > So in short, stick a struct backing_dev_info into whatever represents
    > a client, initialize it using bdi_init(), and destroy it using
    > bdi_destroy().
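
    Something like this, completely untested; "smb_client" and the field
    names are made-up placeholders for whatever your per-server structure
    actually is:

        #include <linux/fs.h>
        #include <linux/backing-dev.h>

        struct smb_client {
                struct backing_dev_info bdi;
                atomic_t in_flight;	/* outstanding requests */
        };

        static int smb_client_init(struct smb_client *client)
        {
                atomic_set(&client->in_flight, 0);
                return bdi_init(&client->bdi);
        }

        static void smb_client_destroy(struct smb_client *client)
        {
                bdi_destroy(&client->bdi);
        }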

    Oh, and the most important point: make your fresh I_NEW inodes point
    to this bdi struct.
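
    That is, wherever you set up a fresh inode, something like (again
    untested, using the smb_client above):

        inode->i_mapping->backing_dev_info = &client->bdi;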

    > Mark it congested once you have 50 (or more) outstanding requests,
    > and clear congestion when you drop below 50.
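
    Sticking with the made-up smb_client, wrap your request submit and
    complete paths in something like:

        #define SMB_MAX_REQUESTS	50	/* typical server limit */

        static void smb_request_get(struct smb_client *client)
        {
                if (atomic_inc_return(&client->in_flight) >= SMB_MAX_REQUESTS)
                        set_bdi_congested(&client->bdi, WRITE);
        }

        static void smb_request_put(struct smb_client *client)
        {
                if (atomic_dec_return(&client->in_flight) < SMB_MAX_REQUESTS)
                        clear_bdi_congested(&client->bdi, WRITE);
        }

    If reads queue up against the same 50-request limit you'd want to set
    and clear READ congestion in the same places.
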
    >
    > and you should be set.
    >

