    From: Jens Axboe
    Subject: [PATCHSET v4 0/12] Add support for async buffered reads
    Date: 2020-05-24
    We technically support this already through io_uring, but it's
    implemented with a thread backend to support cases where we would
    block. This isn't ideal.

    After a few prep patches, the core of this patchset is adding support
    for async callbacks on page unlock. With this primitive, we can simply
    retry the IO operation. With io_uring, this works a lot like poll based
    retry for files that support it. If a page is currently locked and
    needed, -EIOCBQUEUED is returned with a callback armed. The caller's
    callback is then responsible for restarting the operation.
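
    To make that contract concrete, here's a rough caller-side sketch.
    This is not the io_uring code from the series; struct my_req,
    my_issue(), my_wake(), and my_complete() are made-up names, and it
    assumes the iocb->ki_waitq / IOCB_WAITQ pieces added by the prep
    patches:

    /* needs <linux/fs.h>, <linux/pagemap.h>, <linux/wait.h> */
    struct my_req {
            struct file *file;
            struct kiocb iocb;
            struct iov_iter iter;
            struct wait_page_queue wpq;
    };

    static void my_issue(struct my_req *req);
    static void my_complete(struct my_req *req, ssize_t ret);

    /* Unlock callback: the page we needed is now unlocked, retry the IO */
    static int my_wake(struct wait_queue_entry *wait, unsigned mode,
                       int sync, void *key)
    {
            struct my_req *req = container_of(wait, struct my_req, wpq.wait);

            list_del_init(&wait->entry);
            /* sketched as a direct call; real code defers to task context */
            my_issue(req);
            return 1;
    }

    static void my_issue(struct my_req *req)
    {
            ssize_t ret;

            init_waitqueue_func_entry(&req->wpq.wait, my_wake);
            req->iocb.ki_waitq = &req->wpq;
            req->iocb.ki_flags |= IOCB_WAITQ;

            ret = call_read_iter(req->file, &req->iocb, &req->iter);
            if (ret != -EIOCBQUEUED)
                    my_complete(req, ret);
            /* else: my_wake() fires on page unlock and re-issues the read */
    }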

    With this callback primitive, we can add support for
    generic_file_buffered_read(), which is what most file systems end up
    using for buffered reads. XFS/ext4/btrfs/bdev are wired up, and adding
    more should be trivial.
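
    On the filemap side, the idea is roughly this: where the buffered read
    path would previously block on a locked page, it can now arm the
    caller's waitqueue entry and bubble -EIOCBQUEUED back up. A simplified
    sketch of that shape (not the exact patch; my_lock_page() is a made-up
    helper, and it assumes the __lock_page_async() helper this series adds
    to pagemap.h):

    /* If the caller opted in via IOCB_WAITQ, don't sleep on a locked
     * page: queue iocb->ki_waitq for a wakeup on unlock, and let
     * -EIOCBQUEUED propagate out so the caller can retry from there.
     */
    static int my_lock_page(struct page *page, struct kiocb *iocb)
    {
            if (trylock_page(page))
                    return 0;
            if (!(iocb->ki_flags & IOCB_WAITQ))
                    return lock_page_killable(page);

            /* arms the wait entry; returns -EIOCBQUEUED if still locked */
            return __lock_page_async(page, iocb->ki_waitq);
    }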

    A file marks support for this by setting FMODE_BUF_RASYNC, similar
    to what we do for FMODE_NOWAIT. I'm open to suggestions on whether
    this is the preferred method or not.
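
    As a minimal sketch of the opt-in (myfs_file_open() is a made-up name;
    the XFS/ext4/btrfs/bdev patches in this series each make the
    equivalent one-line change in their open paths):

    static int myfs_file_open(struct inode *inode, struct file *filp)
    {
            /* buffered reads on this file may use async unlock retries */
            filp->f_mode |= FMODE_BUF_RASYNC;
            return generic_file_open(inode, filp);
    }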

    In terms of results, I wrote a small test app that randomly reads 4G
    of data in 4K chunks from a file hosted on ext4, using a queue depth
    of 32. If you want to test it yourself, you can just use buffered=1
    with ioengine=io_uring in fio. No application changes are needed to
    use the more optimized buffered async reads.
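
    Something like the following fio invocation approximates that test
    (the filename is just an example):

    fio --name=bufread --filename=/data/file --size=4g --bs=4k \
        --rw=randread --ioengine=io_uring --buffered=1 --iodepth=32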

    preadv for comparison:
    real 1m13.821s
    user 0m0.558s
    sys 0m11.125s
    CPU ~13%

    Mainline:
    real 0m12.054s
    user 0m0.111s
    sys 0m5.659s
    CPU ~32% + ~50% == ~82%

    This patchset:
    real 0m9.283s
    user 0m0.147s
    sys 0m4.619s
    CPU ~52%

    The CPU numbers are just a rough estimate. For the mainline io_uring
    run, this includes the app itself and all the threads doing IO on its
    behalf: ~32% for the app, plus ~1.6% for each of the 32 workers
    (~51%), which is where the ~82% total above comes from. The context
    switch rate is much lower with the patchset, since we only have the
    one task performing IO.

    I also ran a simple fio based test case, varying the queue depth from
    1 to 16, doubling every time:

    [buf-test]
    filename=/data/file
    direct=0
    ioengine=io_uring
    norandommap
    rw=randread
    bs=4k
    iodepth=${QD}
    randseed=89
    runtime=10s
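
    fio expands ${QD} from the environment, so (assuming the job file is
    saved as buf-test.fio) the sweep can be driven with:

    for QD in 1 2 4 8 16; do
            QD=$QD fio buf-test.fio
    done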

    QD    Patchset IOPS    Mainline IOPS
     1            9046             8294
     2           19.8k            18.9k
     4           39.2k            28.5k
     8           64.4k            31.4k
    16           65.7k            37.8k

    This is outside of my usual test environment, so it's just running on
    a virtualized NVMe device in qemu, using ext4 as the file system. NVMe
    isn't very efficient when virtualized, so we run out of steam at ~65K
    IOPS, which is why we flatline on the patched side (nvme_submit_cmd()
    eats ~75% of the test app CPU). Before that happens, the scaling is
    linear. Not shown is the context switch rate, which is massively lower
    with the new code. The old thread offload adds a blocking thread per
    pending IO, so the context switch rate quickly goes through the roof.

    The goal here is efficiency. Async thread offload adds latency, and
    it also adds noticeable overhead on items such as adding pages to the
    page cache. By allowing proper async buffered read support, we don't
    have X threads hammering on the same inode page cache; we have just
    the single app actually doing IO.

    I've been beating on this and it's solid for me, and I'm now pretty
    happy with how it all turned out. I'm not aware of any missing
    bits/pieces or code cleanups that need doing.

    Series can also be found here:

    https://git.kernel.dk/cgit/linux-block/log/?h=async-buffered.4

    or pull from:

    git://git.kernel.dk/linux-block async-buffered.4

    fs/block_dev.c            |   2 +-
    fs/btrfs/file.c           |   2 +-
    fs/ext4/file.c            |   2 +-
    fs/io_uring.c             | 114 ++++++++++++++++++++++++++++++++++++++
    fs/xfs/xfs_file.c         |   2 +-
    include/linux/blk_types.h |   3 +-
    include/linux/fs.h        |  10 +++-
    include/linux/pagemap.h   |  67 ++++++++++++++++++++++
    mm/filemap.c              | 111 ++++++++++++++++++++++++-------------
    9 files changed, 267 insertions(+), 46 deletions(-)

    Changes since v3:
    - io_uring: don't retry if REQ_F_NOWAIT is set
    - io_uring: alloc req->io if the request type didn't already do so
    - Add iocb->ki_waitq instead of (ab)using iocb->private

    Changes since v2:
    - Get rid of the unnecessary wait_page_async struct, just use
      wait_page_queue
    - Add another prep handler, adding wake_page_match()
    - Use wake_page_match() in both callers

    Changes since v1:
    - Fix an issue with inline page locking
    - Fix a potential race with __wait_on_page_locked_async()
    - Fix a hang related to not setting page_match, thus missing a wakeup

    --
    Jens Axboe

