    Date:    2012-03-16
    Subject: [ 03/38] aio: fix the "too late munmap()" race
    3.0-stable review patch.  If anyone has any objections, please let me know.

    ------------------
    From: Al Viro <viro@ZenIV.linux.org.uk>

    commit c7b285550544c22bc005ec20978472c9ac7138c6 upstream.

    Current code has put_ioctx() called asynchronously from aio_fput_routine();
    that's done *after* we have killed the request that used to pin ioctx,
    so there's nothing to stop io_destroy() waiting in wait_for_all_aios()
    from progressing. As a result, we can end up with the async call of
    put_ioctx() being the last one, possibly happening during exit_mmap()
    or elf_core_dump(), neither of which expects a stray munmap() to be
    done to them...
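
    To make the window concrete, the interleaving looks roughly like this
    (a reconstruction from the description above - the call sites are the
    real fs/aio.c ones, the exact timing is illustrative):

    	aio_fput_routine()		io_destroy() / exit_aio()
    	------------------		-------------------------
    	really_put_req(ctx, req)
    	  (last request unlinked)
    					wait_for_all_aios(ctx)
    					  (no requests left, returns)
    					teardown proceeds; the task can
    					enter exit_mmap()/elf_core_dump()
    	put_ioctx(ctx)
    	  (drops the last reference,
    	  frees the ioctx and munmap()s
    	  the aio ring out from under
    	  the exiting/dumping task)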

    We do need to prevent _freeing_ the ioctx until aio_fput_routine() is
    done with it, but that's all we care about - neither io_destroy() nor
    exit_aio() will progress past wait_for_all_aios() until aio_fput_routine()
    does really_put_req(), so the ioctx teardown won't be done until then
    and we don't care about the contents of ioctx past that point.

    Since the actual freeing of these suckers is RCU-delayed, we don't need
    to bump the ioctx refcount when a request goes into the list for async
    removal. All we need is rcu_read_lock() held just over the
    ->ctx_lock-protected area in aio_fput_routine().
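
    The underlying pattern is generic: once the final free is deferred
    through call_rcu(), a reader that only needs the object's *memory* to
    stay valid can rely on rcu_read_lock() instead of a reference count.
    A minimal sketch, assuming hypothetical names (my_ctx, my_ctx_free,
    my_ctx_last_put, my_ctx_worker) - the kernel primitives themselves
    are the real ones:

    	#include <linux/rcupdate.h>
    	#include <linux/slab.h>
    	#include <linux/spinlock.h>

    	struct my_ctx {
    		spinlock_t	lock;
    		struct rcu_head	rcu_head;
    		/* ... */
    	};

    	/* RCU callback: runs only after all in-flight read-side
    	 * critical sections have ended */
    	static void my_ctx_free(struct rcu_head *head)
    	{
    		kfree(container_of(head, struct my_ctx, rcu_head));
    	}

    	/* called when the last reference is dropped */
    	static void my_ctx_last_put(struct my_ctx *ctx)
    	{
    		call_rcu(&ctx->rcu_head, my_ctx_free);	/* freeing is RCU-delayed */
    	}

    	/* async worker: needs the memory to stay around, not the
    	 * object to stay alive */
    	static void my_ctx_worker(struct my_ctx *ctx)
    	{
    		rcu_read_lock();	/* blocks my_ctx_free() ... */
    		spin_lock_irq(&ctx->lock);
    		/* ctx may be dead by now, but it cannot be freed yet */
    		spin_unlock_irq(&ctx->lock);
    		rcu_read_unlock();	/* ... until here */
    	}

    That is exactly the shape of the aio_fput_routine() hunk below: the
    rcu_read_lock()/rcu_read_unlock() pair brackets the ->ctx_lock-protected
    region, which makes the old get_ioctx()/put_ioctx() pair unnecessary.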

    Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
    Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
    Acked-by: Benjamin LaHaise <bcrl@kvack.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    fs/aio.c | 14 ++++++--------
    1 file changed, 6 insertions(+), 8 deletions(-)
    --- a/fs/aio.c
    +++ b/fs/aio.c
    @@ -228,12 +228,6 @@ static void __put_ioctx(struct kioctx *c
     	call_rcu(&ctx->rcu_head, ctx_rcu_free);
     }
     
    -static inline void get_ioctx(struct kioctx *kioctx)
    -{
    -	BUG_ON(atomic_read(&kioctx->users) <= 0);
    -	atomic_inc(&kioctx->users);
    -}
    -
     static inline int try_get_ioctx(struct kioctx *kioctx)
     {
     	return atomic_inc_not_zero(&kioctx->users);
    @@ -527,11 +521,16 @@ static void aio_fput_routine(struct work
     		fput(req->ki_filp);
     
     		/* Link the iocb into the context's free list */
    +		rcu_read_lock();
     		spin_lock_irq(&ctx->ctx_lock);
     		really_put_req(ctx, req);
    +		/*
    +		 * at that point ctx might've been killed, but actual
    +		 * freeing is RCU'd
    +		 */
     		spin_unlock_irq(&ctx->ctx_lock);
    +		rcu_read_unlock();
     
    -		put_ioctx(ctx);
     		spin_lock_irq(&fput_lock);
     	}
     	spin_unlock_irq(&fput_lock);
    @@ -562,7 +561,6 @@ static int __aio_put_req(struct kioctx *
     	 * this function will be executed w/out any aio kthread wakeup.
     	 */
     	if (unlikely(!fput_atomic(req->ki_filp))) {
    -		get_ioctx(ctx);
     		spin_lock(&fput_lock);
     		list_add(&req->ki_list, &fput_head);
     		spin_unlock(&fput_lock);

