    Subject: Re: [PATCH] ext4: fix racy use-after-free in ext4_end_io_dio()
    Heh. It took me about 2 seconds to trigger it in a VM :)

    One reason it triggered so fast is that my VM test setup runs
    everything out of RAM (the disks on the host are files in a tmpfs),
    but the main reason we were hitting it is that bcache usually runs
    the bio->bi_end_io function out of a workqueue, not irq context.
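
    (For anyone unfamiliar with the pattern, a driver completing bios
    from a workqueue does something like the sketch below; this is a
    simplified, hypothetical example, not bcache's actual code. The
    point is that ->bi_end_io, and anything it queues, runs in process
    context, where a freshly queued work item can be scheduled right
    away.)

        /* Hypothetical sketch: completing a bio from process context
         * via a workqueue rather than from the IRQ handler. */
        struct deferred_endio {
                struct work_struct      work;
                struct bio              *bio;
                int                     error;
        };

        static void deferred_endio_fn(struct work_struct *w)
        {
                struct deferred_endio *d =
                        container_of(w, struct deferred_endio, work);

                /* ->bi_end_io (here, ext4_end_io_dio()) runs in
                 * process context, not under an IRQ handler. */
                bio_endio(d->bio, d->error);
                kfree(d);
        }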

    It also seems to only trigger when a dio write is extending a file;
    the same test setup run against an existing file doesn't ever cause
    (visible) slab corruption.

    Do you think this would also explain the corruption D is seeing in vd?
    I haven't yet figured out a mechanism but the bug seems to fit.

    On Thu, Nov 24, 2011 at 3:18 PM, Ted Ts'o <> wrote:
    > On Thu, Nov 24, 2011 at 11:46:26AM -0800, Tejun Heo wrote:
    >> ext4_end_io_dio() queues io_end->work and then clears iocb->private;
    >> however, io_end->work completes the iocb by calling aio_complete(),
    >> which frees it.  If the work runs before ext4_end_io_dio() gets
    >> around to clearing iocb->private, the clear becomes a use-after-free.
    >> Detected and tested with slab poisoning.
    >> Signed-off-by: Tejun Heo <>
    >> Reported-by: Kent Overstreet <>
    >> Tested-by: Kent Overstreet <>
    >> Cc:
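
    To make the ordering concrete, here's my reading of the race as a
    sketch (simplified names and context; the real code is
    ext4_end_io_dio() in fs/ext4/inode.c):

        /* Sketch of the buggy ordering, simplified: */
        io_end->iocb = iocb;
        io_end->result = ret;
        queue_work(wq, &io_end->work);  /* worker can run immediately and
                                         * call aio_complete(), freeing
                                         * the iocb...                   */
        iocb->private = NULL;           /* ...making this write land in
                                         * freed memory                  */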
    > Thanks!!  I've been trying to track down this bug for a while.  The
    > repro case I had ran 12 fio instances against 12 different file
    > systems with the following configuration:
    > [global]
    > direct=1
    > ioengine=libaio
    > iodepth=1
    > bs=4k
    > ba=4k
    > size=128m
    > [create]
    > filename=${TESTDIR}
    > rw=write
    > ... and would leave a few inodes with elevated i_ioend_counts, which
    > means any attempt to delete those inodes or to unmount the file system
    > owning those inodes would hang forever.
    > With your patch this problem goes away.
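
    That hang lines up with how eviction drains pending io_ends; roughly
    (simplified from fs/ext4/inode.c):

        /* Unmount/delete ends up waiting for the inode's pending io_end
         * count to drain; a completion lost to the race means
         * i_ioend_count never reaches zero and this blocks forever. */
        void ext4_ioend_wait(struct inode *inode)
        {
                wait_queue_head_t *wq = ext4_ioend_wq(inode);

                wait_event(*wq,
                        (atomic_read(&EXT4_I(inode)->i_ioend_count) == 0));
        }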
    >> I *think* this is the correct fix but am not too familiar with the
    >> code path, so please proceed with caution.
    > Looks good to me.  Thanks, applied.
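
    For the archives: the fix boils down to not touching the iocb once
    the work item that can complete (and free) it has been queued, i.e.
    reordering the two statements from the sketch above:

        /* Fixed ordering (sketch): finish all iocb accesses first... */
        iocb->private = NULL;
        /* ...then queue the work that may call aio_complete() and free
         * the iocb. */
        queue_work(wq, &io_end->work);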
    >> Thank you.
    > No, thank *you*!  :-)
    >                                        - Ted
    > P.S.  It would be nice to get this into xfstests, but it requires at
    > least 10-12 HDDs (12 to repro it reliably) and a fairly high
    > core-count machine in order to reproduce it.  I played around with
    > trying to create a reproducer that worked on a smaller number of
    > disks and/or fio instances/CPUs, but I was never able to manage it.