    Date:	Wed, 10 Aug 2011
    From:	Wu Fengguang
    Subject: Re: [PATCH 4/5] writeback: per task dirty rate limit
    On Wed, Aug 10, 2011 at 06:25:48PM +0800, Peter Zijlstra wrote:
    > On Wed, 2011-08-10 at 11:40 +0800, Wu Fengguang wrote:
    > > On Wed, Aug 10, 2011 at 02:35:06AM +0800, Peter Zijlstra wrote:
    > > > On Sat, 2011-08-06 at 16:44 +0800, Wu Fengguang wrote:
    > > > >
    > > > > Add two fields to task_struct.
    > > > >
    > > > > 1) account dirtied pages in the individual tasks, for accuracy
    > > > > 2) per-task balance_dirty_pages() call intervals, for flexibility
    > > > >
    > > > > The balance_dirty_pages() call interval (i.e. nr_dirtied_pause) will
    > > > > scale roughly with the square root of the safety gap between the
    > > > > number of dirty pages and the dirty threshold.
    > > > >
    > > > > XXX: The main problem with per-task nr_dirtied is that if 10k tasks start
    > > > > dirtying pages at exactly the same time, each task will be assigned a
    > > > > large initial nr_dirtied_pause, so that the dirty threshold will be
    > > > > exceeded long before each task reaches its nr_dirtied_pause and hence
    > > > > calls balance_dirty_pages().
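
    (For reference, the per-task throttling described above boils down to
    roughly the sketch below; the two task_struct fields are the ones named
    in the changelog, but the reset logic and surrounding details are
    simplified for illustration.)

    void balance_dirty_pages_ratelimited_nr(struct address_space *mapping,
    					unsigned long nr_pages_dirtied)
    {
    	/* account the pages this task just dirtied against its own counter */
    	current->nr_dirtied += nr_pages_dirtied;

    	/*
    	 * Only enter the throttling path once this task has used up its
    	 * private pause budget; balance_dirty_pages() is assumed to reset
    	 * ->nr_dirtied and pick the next ->nr_dirtied_pause.
    	 */
    	if (current->nr_dirtied >= current->nr_dirtied_pause)
    		balance_dirty_pages(mapping, current->nr_dirtied);
    }
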
    > > >
    > > > Right, so why remove the per-cpu threshold? You can keep that as a bound
    > > > on the number of outstanding dirty pages.
    > >
    > > Right, I also have the vague feeling that the per-cpu threshold can
    > > somehow back up the per-task threshold in case there are too many tasks.
    > >
    > > > Losing that bound is actually a bad thing (TM), since you could have
    > > > configured a tight dirty limit and lock up your machine this way.
    > >
    > > It seems good enough to only remove the 4MB upper limit for
    > > ratelimit_pages, so that the per-cpu limit won't kick in too
    > > frequently on typical machines.
    > >
    > > * Here we set ratelimit_pages to a level which ensures that when all CPUs are
    > > * dirtying in parallel, we cannot go more than 3% (1/32) over the dirty memory
    > > * thresholds before writeback cuts in.
    > > - *
    > > - * But the limit should not be set too high. Because it also controls the
    > > - * amount of memory which the balance_dirty_pages() caller has to write back.
    > > - * If this is too large then the caller will block on the IO queue all the
    > > - * time. So limit it to four megabytes - the balance_dirty_pages() caller
    > > - * will write six megabyte chunks, max.
    > > - */
    > > -
    > > void writeback_set_ratelimit(void)
    > > {
    > > 	ratelimit_pages = vm_total_pages / (num_online_cpus() * 32);
    > > 	if (ratelimit_pages < 16)
    > > 		ratelimit_pages = 16;
    > > -	if (ratelimit_pages * PAGE_CACHE_SIZE > 4096 * 1024)
    > > -		ratelimit_pages = (4096 * 1024) / PAGE_CACHE_SIZE;
    > > }
    >
    > Uhm, so what's your bound then? 1/32 of the per-cpu memory seems rather
    > a lot.

    Ah yes, vm_total_pages is no longer suitable here; we may use

    ratelimit_pages = dirty_threshold / (num_online_cpus() * 32);

    We just need to ensure that dirty_threshold won't be exceeded by too much
    in the rare case that tsk->nr_dirtied_pause cannot keep dirty pages under
    control, e.g. when there are >10k dirtier tasks.
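
    A minimal sketch of what writeback_set_ratelimit() could then look like
    is below; the global_dirty_limits() call is just one assumed way of
    obtaining the current dirty threshold, so take the details as
    illustrative rather than final:

    void writeback_set_ratelimit(void)
    {
    	unsigned long background_thresh;
    	unsigned long dirty_thresh;

    	global_dirty_limits(&background_thresh, &dirty_thresh);

    	/*
    	 * Bound the per-cpu ratelimit by the dirty threshold rather than by
    	 * total memory, so that even with all CPUs dirtying in parallel the
    	 * overshoot stays around 1/32 of the dirty limit.
    	 */
    	ratelimit_pages = dirty_thresh / (num_online_cpus() * 32);
    	if (ratelimit_pages < 16)
    		ratelimit_pages = 16;
    }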

    Thanks,
    Fengguang

