    Date: 2009-04-21
    From: Andrea Righi <righi.andrea@gmail.com>
    Subject: Re: [PATCH 1/9] io-throttle documentation
    On Tue, Apr 21, 2009 at 02:29:58PM -0400, Vivek Goyal wrote:
    > On Tue, Apr 21, 2009 at 10:23:05AM -0400, Vivek Goyal wrote:
    > > On Tue, Apr 21, 2009 at 10:37:03AM +0200, Andrea Righi wrote:
    > > > On Mon, Apr 20, 2009 at 09:08:46PM -0400, Vivek Goyal wrote:
    > > > > On Tue, Apr 21, 2009 at 12:05:12AM +0200, Andrea Righi wrote:
    > > > >
    > > > > [..]
    > > > > > > > > Are we not already controlling submission of requests (at a crude level)?
    > > > > > > > > If an application is doing writeout at a high rate, then it hits
    > > > > > > > > vm_dirty_ratio and is forced to do writeout itself, and hence it is slowed
    > > > > > > > > down and not allowed to submit writes at a high rate.
    > > > > > > > >
    > > > > > > > > It is just not a very fair scheme right now, as during writeout
    > > > > > > > > a high prio/high weight cgroup application can start writing out some
    > > > > > > > > other cgroups' pages.
    > > > > > > > >
    > > > > > > > > For this we probably need some combination of solutions, like a
    > > > > > > > > per-cgroup upper limit on dirty pages. Secondly, if an application
    > > > > > > > > is slowed down because of hitting vm_dirty_ratio, it should probably try
    > > > > > > > > to write out the inode it is dirtying first, instead of picking any random
    > > > > > > > > inode and associated pages. This will ensure that a high weight
    > > > > > > > > application can quickly get through the writeouts and see higher
    > > > > > > > > throughput from the disk.
    > > > > > > >
    > > > > > > > For the first, I submitted a patchset some months ago to provide this
    > > > > > > > feature in the memory controller:
    > > > > > > >
    > > > > > > > https://lists.linux-foundation.org/pipermail/containers/2008-September/013140.html
    > > > > > > >
    > > > > > > > We focused on the best interface to use for setting the dirty pages
    > > > > > > > limit, but we didn't finalize it. I can rework that and repost an
    > > > > > > > updated version. Now that we have dirty_ratio/dirty_bytes to set the
    > > > > > > > global limit, I think we can use the same interface and the same semantics
    > > > > > > > within the cgroup fs, something like:
    > > > > > > >
    > > > > > > > memory.dirty_ratio
    > > > > > > > memory.dirty_bytes
    > > > > > > >
    > > > > > > > For the second point something like this should be enough to force tasks
    > > > > > > > to write out only the inode they're actually dirtying when they hit the
    > > > > > > > vm_dirty_ratio limit. But it should be tested carefully and may cause
    > > > > > > > heavy performance regressions.
    > > > > > > >
    > > > > > > > Signed-off-by: Andrea Righi <righi.andrea@gmail.com>
    > > > > > > > ---
    > > > > > > > mm/page-writeback.c | 2 +-
    > > > > > > > 1 files changed, 1 insertions(+), 1 deletions(-)
    > > > > > > >
    > > > > > > > diff --git a/mm/page-writeback.c b/mm/page-writeback.c
    > > > > > > > index 2630937..1e07c9d 100644
    > > > > > > > --- a/mm/page-writeback.c
    > > > > > > > +++ b/mm/page-writeback.c
    > > > > > > > @@ -543,7 +543,7 @@ static void balance_dirty_pages(struct address_space *mapping)
    > > > > > > > * been flushed to permanent storage.
    > > > > > > > */
    > > > > > > > if (bdi_nr_reclaimable) {
    > > > > > > > - writeback_inodes(&wbc);
    > > > > > > > + sync_inode(mapping->host, &wbc);
    > > > > > > > pages_written += write_chunk - wbc.nr_to_write;
    > > > > > > > get_dirty_limits(&background_thresh, &dirty_thresh,
    > > > > > > > &bdi_thresh, bdi);
    > > > > > >
    > > > > > > This patch seems to be helping me a bit in getting more service
    > > > > > > differentiation between two dd writers of different weights. But strangely
    > > > > > > it is helping only for ext3 and not ext4. Debugging is on.
    > > > > >
    > > > > > Are you explicitly mounting ext3 with data=ordered?
    > > > >
    > > > > Yes. Still using 29-rc8 and data=ordered was the default then.
    > > > >
    > > > > I got two partitions on the same disk and created one ext3 filesystem on
    > > > > each partition (just to take journaling interference between the two dd
    > > > > threads out of the picture for the time being).
    > > > >
    > > > > Two dd threads are doing writes, one to each partition.
    > > >
    > > > ...and if you're using data=writeback with ext4, sync_inode() should sync
    > > > the metadata only. If this is the case, could you check data=ordered
    > > > for ext4 as well?
    > >
    > > No, even data=ordered mode with ext4 is not helping. It has to be
    > > something else.
    > >
    >
    > Ok, with data=ordered mode on ext4 I can now get significant service
    > differentiation between two dd processes. I had to tweak cfq a bit:
    >
    > - Instead of a 40ms slice for the async queue, do 20ms at a time (tunable).
    > - Change cfq quantum from 4 to 1, so as not to dispatch a bunch of requests
    >   in one go.
    >
    > The above changes help a bit in keeping two continuously backlogged queues
    > at the IO scheduler, so that it can offer more disk time to the higher
    > weight process.
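
    For reference, the CFQ knobs involved in the tweak quoted above are ordinary
    tunables. The exact form of the change isn't shown in the thread, so the
    snippet below (declarations paraphrased from block/cfq-iosched.c as of
    2.6.29) only shows the stock baseline being tweaked and where the runtime
    knobs live:

    /*
     * CFQ defaults relevant to the tweak above (block/cfq-iosched.c,
     * 2.6.29 values; declarations paraphrased):
     */
    static const int cfq_quantum = 4;           /* max requests dispatched per round */
    static const int cfq_slice_async = HZ / 25; /* 40ms async slice at HZ=1000 */

    /*
     * Both are also exposed as runtime tunables, e.g.:
     *   /sys/block/<dev>/queue/iosched/quantum
     *   /sys/block/<dev>/queue/iosched/slice_async
     */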
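
    Going back to the per-cgroup dirty limit idea discussed earlier in the
    thread (memory.dirty_ratio / memory.dirty_bytes), here is a minimal sketch
    of how such a limit could throttle a dirtier, in the same spirit as the
    global check in balance_dirty_pages(). Note that mem_cgroup_dirty_limit()
    and mem_cgroup_nr_dirty() are hypothetical helpers the memory controller
    would have to provide; they are not existing kernel API:

    /*
     * Sketch only: mem_cgroup_dirty_limit() and mem_cgroup_nr_dirty() are
     * hypothetical helpers (backed by memory.dirty_ratio / memory.dirty_bytes);
     * they do not exist in the current kernel.
     */
    static void balance_cgroup_dirty_pages(struct mem_cgroup *memcg)
    {
            unsigned long limit = mem_cgroup_dirty_limit(memcg);

            /* Wait for writeback to bring this cgroup back under its limit. */
            while (mem_cgroup_nr_dirty(memcg) > limit)
                    congestion_wait(WRITE, HZ / 10);
    }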

    Good. Also, testing WB_SYNC_ALL would be interesting, I think.
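
    A rough sketch of that test would presumably keep the sync_inode() call from
    the patch above and only switch the sync_mode of the writeback_control that
    balance_dirty_pages() builds (fields as in 2.6.29, shown for illustration):

    struct writeback_control wbc = {
            .bdi             = bdi,
            .sync_mode       = WB_SYNC_ALL, /* was WB_SYNC_NONE: also wait for writeback */
            .older_than_this = NULL,
            .nr_to_write     = write_chunk,
            .range_cyclic    = 1,
    };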

    -Andrea

