Subject: Re: [RFC][PATCH] Per file dirty limit throttling
From: Peter Zijlstra
Date: Wed, 18 Aug 2010
    On Wed, 2010-08-18 at 14:52 +0530, Nikanth Karthikesan wrote:
    > On Tuesday 17 August 2010 13:54:35 Peter Zijlstra wrote:
    > > On Tue, 2010-08-17 at 10:39 +0530, Nikanth Karthikesan wrote:
    > > > Oh, nice. A per-task limit is an elegant solution, which should help
    > > > in most of the common cases.
    > > >
    > > > But I just wonder what happens when
    > > > 1. The dirtier is a group of multiple co-operating processes.
    > > > 2. The app is something like a shell script that repeatedly calls dd
    > > > with seek and skip? People do this for data deduplication, sparse
    > > > skipping, etc.
    > > > 3. The app dies and comes back again, like a VM that is rebooted and
    > > > continues writing to a disk backed by a file on the host.
    > > >
    > > > Do you think, in those cases this might still be useful?
    > >
    > > Those cases do indeed defeat the current per-task limit; however, I
    > > think the solution to that is to limit the amount of writeback done by
    > > each blocked process.
    > >
    >
    > Blocked on what? Sorry, I do not understand.

    Blocked in balance_dirty_pages(). By limiting the work done there (or
    rather, the number of page writeback completions you wait for --
    starting the IO isn't that expensive), you can also bound the time a
    task spends blocked, and therefore limit the impact.
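    To make the shape of that concrete, here is a toy userspace
    simulation of the idea -- not the actual kernel code; all names and
    numbers below are made up for illustration. The point is only that
    capping the number of completions each pass waits for caps how long
    a task stays blocked:

#include <stdio.h>

/*
 * Toy model: a task over the dirty threshold starts writeback and
 * then waits for a bounded number of completions per pass, instead
 * of waiting for the global dirty count to fully recover.
 */
static unsigned long nr_dirty = 1000;		/* dirty pages in the system */
static const unsigned long dirty_thresh = 800;	/* global dirty threshold */

/* Stand-in for the flusher completing writeback on up to 'nr' pages. */
static unsigned long complete_writeback(unsigned long nr)
{
	if (nr > nr_dirty)
		nr = nr_dirty;
	nr_dirty -= nr;
	return nr;
}

/* One throttling pass: wait for at most 'budget' completions. */
static void balance_dirty_pages_pass(unsigned long budget)
{
	unsigned long done;

	if (nr_dirty <= dirty_thresh)
		return;		/* under the limit, no throttling needed */

	/*
	 * Starting the IO is cheap; waiting is what hurts.  Capping the
	 * completions waited for caps the time this task is blocked.
	 */
	done = complete_writeback(budget);
	printf("throttled: waited for %lu completions, %lu dirty left\n",
	       done, nr_dirty);
}

int main(void)
{
	while (nr_dirty > dirty_thresh)
		balance_dirty_pages_pass(32);
	printf("back under the threshold with %lu dirty pages\n", nr_dirty);
	return 0;
}

    (In the kernel the wait would of course be on real IO completion,
    and the budget would be per task; the sketch only mimics the
    bounded-wait structure.)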



