lkml.org 
[lkml]   [2010]   [Jun]   [15]   [last100]   RSS Feed
Views: [wrap][no wrap]   [headers]  [forward] 
 
Messages in this thread
    /
    Date
    From
    SubjectRe: [PATCH 11/12] vmscan: Write out dirty pages in batch
    On Tue, 15 Jun 2010 00:08:14 -0400 Rik van Riel <riel@redhat.com> wrote:

    > On 06/14/2010 09:45 PM, Andrew Morton wrote:
    > > On Mon, 14 Jun 2010 21:16:29 -0400 Rik van Riel<riel@redhat.com> wrote:
    > >
    > >> Would it be hard to add a "please flush this file"
    > >> way to call the filesystem flushing threads?
    > >
    > > Passing the igrab()bed inode into the flusher threads would fix the
    > > iput_final() problems, as long as the alloc_pages() caller never blocks
    > > indefinitely waiting for the work which the flusher threads are doing.
    > >
    > > Otherwise we get (very hard-to-hit) deadlocks where the alloc_pages()
    > > caller holds VFS locks and is waiting for the flusher threads while all
    > > the flusher threads are stuck under iput_final() waiting for those VFS
    > > locks.
    > >
    > > That's fixable by not using igrab()/iput(). You can use lock_page() to
    > > pin the address_space. Pass the address of the locked page across to
    > > the flusher threads so they don't try to lock it a second time, or just
    > > use trylocking on that writeback path or whatever.
    >
    > Any thread that does not have __GFP_FS set in its gfp_mask
    > cannot wait for the flusher to complete. This is regardless
    > of the mechanism used to kick the flusher.

    mm... kinda. A bare order-zero __GFP_WAIT allocation can still wait
    forever, afaict.

    > Then again, those threads cannot call ->writepage today
    > either, so we should be fine keeping that behaviour.

    I'm not sure. iput_final() can take a lot of locks, both VFS and
    heaven knows what within the individual filesystems. Is it the case
    that all allocations which occur under all of those locks is always
    !__GFP_FS? Hard to say...

    > Threads that do have __GFP_FS in their gfp_mask can wait
    > for the flusher in various ways. Maybe the lock_page()
    > method can be simplified by having the flusher thread
    > unlock the page the moment it gets it, and then run the
    > normal flusher code?

    Well, _something_ has to pin the address_space. A single locked page
    will do.

    > The pageout code (in shrink_page_list) already unlocks
    > the page anyway before putting it back on the relevant
    > LRU list. It would be easy enough to skip that unlock
    > and let the flusher thread take care of it.

    Once that page is unlocked, we can't touch *mapping - its inode can be
    concurrently reclaimed. Although I guess the technique in
    handle_write_error() can be reused.



    \
     
     \ /
      Last update: 2010-06-15 06:41    [W:0.046 / U:89.288 seconds]
    ©2003-2016 Jasper Spaans. hosted at Digital OceanAdvertise on this site