    Subject: Re: [PATCH v3 0/3] block,nvme: latency-based I/O scheduler

    On Thu, May 09, 2024 at 04:43:21PM -0400, John Meneghini wrote:
    > I'm re-issuing Hannes's latency patches in preparation for LSFMM

    Hello John,

    Just a small note.

    Please don't send out a v3 as a reply to the previous version of the
    series (v2).

    It creates "an unmanageable forest of references in email clients".

    See:
    https://www.kernel.org/doc/html/latest/process/submitting-patches.html#explicit-in-reply-to-headers

    Instead, just add the lore.kernel.org URL of the v2 series to the cover letter.
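
    For example, the v3 cover letter can simply carry a line like the one
    below (the message-id here is a placeholder for illustration, not the
    real v2 thread):

        v2: https://lore.kernel.org/r/<v2-message-id>/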

    See you at LSFMM!


    Kind regards,
    Niklas

    >
    > Changes since V2:
    >
    > I've done quite a bit of work cleaning up these patches. There were a
    > number of checkpatch.pl problems, as well as some compile-time errors
    > when CONFIG_BLK_NODE_LATENCY was turned off. After the cleanup I
    > rebased these patches onto Ewan's "nvme: queue-depth multipath iopolicy"
    > patches. This allowed me to test both iopolicy changes together.
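
    (Side note for readers following along: compile errors with the option
    disabled are usually solved with static inline stubs in a header. A
    minimal sketch of that pattern is below; the CONFIG name matches the one
    mentioned above, but the function names are hypothetical, not what the
    actual patch does.)

    /*
     * Sketch only: typical header pattern for guarding a feature behind
     * CONFIG_BLK_NODE_LATENCY.  blk_nlat_init()/blk_nlat_exit() are made-up
     * names used purely for illustration.
     */
    struct gendisk;

    #ifdef CONFIG_BLK_NODE_LATENCY
    int blk_nlat_init(struct gendisk *disk);
    void blk_nlat_exit(struct gendisk *disk);
    #else
    /* no-op stubs so callers still compile when the option is off */
    static inline int blk_nlat_init(struct gendisk *disk)
    {
            return 0;
    }
    static inline void blk_nlat_exit(struct gendisk *disk)
    {
    }
    #endif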
    >
    > All of my test results, together with the scripts I used to generate these
    > graphs, are available at:
    >
    > https://github.com/johnmeneghini/iopolicy
    >
    > Please use the scripts in this repository to do your own testing.
    >
    > Changes since V1:
    >
    > Hi all,
    >
    > There have been several attempts to implement a latency-based I/O
    > scheduler for native nvme multipath, all of which had their issues.
    >
    > So time to start afresh, this time using the QoS framework
    > already present in the block layer.
    > It consists of two parts:
    > - a new 'blk-nlatency' QoS module, which is just a simple per-node
    > latency tracker
    > - a 'latency' nvme I/O policy
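
    (For readers who haven't seen the earlier rounds: the core idea of a
    per-node latency tracker is a smoothed average per node, with a 'decay'
    knob controlling how quickly old samples lose weight. A rough standalone
    sketch follows; the names, types and exact formula are assumptions for
    illustration, not the blk-nlatency code itself.)

    #include <stdint.h>

    #define NR_NODES 8                      /* illustrative size */

    static int64_t node_avg_ns[NR_NODES];   /* smoothed latency per node, ns */
    static unsigned int decay = 4;          /* larger decay -> smoother average */

    /* Fold one completed I/O's latency into the node's running average. */
    static void node_lat_update(unsigned int node, int64_t sample_ns)
    {
            int64_t diff = sample_ns - node_avg_ns[node];

            /* exponentially weighted moving average: avg += diff / 2^decay */
            node_avg_ns[node] += diff / (1 << decay);
    }

    /* Current smoothed latency for a node, used to compare paths. */
    static int64_t node_lat_read(unsigned int node)
    {
            return node_avg_ns[node];
    }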
    >
    > Using the 'tiobench' fio script with a 512-byte blocksize, I'm getting
    > the following latencies (in usecs) as a baseline:
    > - seq write: avg 186 stddev 331
    > - rand write: avg 4598 stddev 7903
    > - seq read: avg 149 stddev 65
    > - rand read: avg 150 stddev 68
    >
    > Enabling the 'latency' iopolicy:
    > - seq write: avg 178 stddev 113
    > - rand write: avg 3427 stddev 6703
    > - seq read: avg 140 stddev 59
    > - rand read: avg 141 stddev 58
    >
    > Setting the 'decay' parameter to 10:
    > - seq write: avg 182 stddev 65
    > - rand write: avg 2619 stddev 5894
    > - seq read: avg 142 stddev 57
    > - rand read: avg 140 stddev 57
    >
    > That's on a 32G FC testbed running against a brd target,
    > fio running with 48 threads. So promises are met: latency
    > goes down, and we're even able to control the standard
    > deviation via the 'decay' parameter.
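
    (And the policy side is conceptually just "send the next I/O down the
    usable path whose tracked latency is currently lowest". A rough sketch
    with made-up types and names, not the actual nvme multipath code:)

    #include <stddef.h>
    #include <stdint.h>

    struct io_path {
            const char *name;       /* controller/path identifier */
            int usable;             /* path is live and optimized */
            int64_t avg_lat_ns;     /* smoothed latency, e.g. from the tracker above */
    };

    /* Return the usable path with the lowest tracked latency, or NULL. */
    static struct io_path *select_latency_path(struct io_path *paths, size_t n)
    {
            struct io_path *best = NULL;
            size_t i;

            for (i = 0; i < n; i++) {
                    if (!paths[i].usable)
                            continue;
                    if (!best || paths[i].avg_lat_ns < best->avg_lat_ns)
                            best = &paths[i];
            }
            return best;
    }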
    >
    > As usual, comments and reviews are welcome.
    >
    > Changes to the original version:
    > - split the rqos debugfs entries
    > - modify the commit message to indicate latency
    > - rename to blk-nlatency
    >
    > Hannes Reinecke (2):
    > block: track per-node I/O latency
    > nvme: add 'latency' iopolicy
    >
    > John Meneghini (1):
    > nvme: multipath: pr_notice when iopolicy changes
    >
    >  MAINTAINERS                   |   1 +
    >  block/Kconfig                 |   9 +
    >  block/Makefile                |   1 +
    >  block/blk-mq-debugfs.c        |   2 +
    >  block/blk-nlatency.c          | 389 ++++++++++++++++++++++++++++++++++
    >  block/blk-rq-qos.h            |   6 +
    >  drivers/nvme/host/multipath.c |  73 ++++++-
    >  drivers/nvme/host/nvme.h      |   1 +
    >  include/linux/blk-mq.h        |  11 +
    >  9 files changed, 484 insertions(+), 9 deletions(-)
    > create mode 100644 block/blk-nlatency.c
    >
    > --
    > 2.39.3
    >
    >
