    Subject: [PATCH 4.9 049/103] vhost_net: use packet weight for rx handler, too
    From: Paolo Abeni <pabeni@redhat.com>

    commit db688c24eada63b1efe6d0d7d835e5c3bdd71fd3 upstream.

    Similar to commit a2ac99905f1e ("vhost-net: set packet weight of
    tx polling to 2 * vq size"), we need a packet-based limit for
    handle_rx, too - otherwise, under an rx flood with small packets,
    tx can be delayed for a very long time, even without busypolling.

    The packet limit applied to handle_rx must be the same as the one
    applied by handle_tx, or we will get unfair scheduling between rx
    and tx. Tying such a limit to the queue length makes it less
    effective for large queue sizes and can introduce large process
    scheduler latencies, so a constant value is used - just like the
    existing bytes limit.
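
    For illustration, the following is a minimal, userspace-compilable
    sketch of the bounded-work pattern this patch applies to both
    handlers; process_one_packet() and requeue_handler() are
    hypothetical stand-ins, not vhost APIs, and only the two budget
    macros are taken from the patch itself:

    #include <stddef.h>

    /* Budgets from the patch: bytes and packets per handler run. */
    #define VHOST_NET_WEIGHT     0x80000
    #define VHOST_NET_PKT_WEIGHT 256

    /* Hypothetical stand-ins for the real vhost work, sketch only. */
    static size_t process_one_packet(void) { return 64; } /* bytes handled */
    static void requeue_handler(void) { }                 /* reschedule the job */

    static void handler_run(void)
    {
            size_t total_len = 0;
            int pkts = 0;

            for (;;) {
                    size_t len = process_one_packet();
                    if (len == 0)
                            break;  /* no more work queued */

                    total_len += len;
                    /* Same exit condition now used by handle_tx and handle_rx:
                     * stop once either the byte or the packet budget is spent,
                     * so one flooded virtqueue cannot starve the others.
                     */
                    if (total_len >= VHOST_NET_WEIGHT ||
                        ++pkts >= VHOST_NET_PKT_WEIGHT) {
                            requeue_handler();
                            break;
                    }
            }
    }

    int main(void)
    {
            handler_run();
            return 0;
    }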

    The selected limit has been validated with the PVP[1] performance
    test at different queue sizes:

    queue size      256     512     1024

    baseline        366     354     362
    weight 128      715     723     670
    weight 256      740     745     733
    weight 512      600     460     583
    weight 1024     423     427     418

    A packet weight of 256 gives peak performance in all the tested
    scenarios.

    No measurable regression in unidirectional performance tests has
    been detected.

    [1] https://developers.redhat.com/blog/2017/06/05/measuring-and-comparing-open-vswitch-performance/

    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
    Acked-by: Jason Wang <jasowang@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
    Signed-off-by: Ben Hutchings <ben.hutchings@codethink.co.uk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    drivers/vhost/net.c | 12 ++++++++----
    1 file changed, 8 insertions(+), 4 deletions(-)

    --- a/drivers/vhost/net.c
    +++ b/drivers/vhost/net.c
    @@ -40,8 +40,10 @@ MODULE_PARM_DESC(experimental_zcopytx, "
    #define VHOST_NET_WEIGHT 0x80000

    /* Max number of packets transferred before requeueing the job.
    - * Using this limit prevents one virtqueue from starving rx. */
    -#define VHOST_NET_PKT_WEIGHT(vq) ((vq)->num * 2)
    + * Using this limit prevents one virtqueue from starving others with small
    + * pkts.
    + */
    +#define VHOST_NET_PKT_WEIGHT 256

    /* MAX number of TX used buffers for outstanding zerocopy */
    #define VHOST_MAX_PEND 128
    @@ -480,7 +482,7 @@ static void handle_tx(struct vhost_net *
    total_len += len;
    vhost_net_tx_packet(net);
    if (unlikely(total_len >= VHOST_NET_WEIGHT) ||
    - unlikely(++sent_pkts >= VHOST_NET_PKT_WEIGHT(vq))) {
    + unlikely(++sent_pkts >= VHOST_NET_PKT_WEIGHT)) {
    vhost_poll_queue(&vq->poll);
    break;
    }
    @@ -662,6 +664,7 @@ static void handle_rx(struct vhost_net *
    struct socket *sock;
    struct iov_iter fixup;
    __virtio16 num_buffers;
    + int recv_pkts = 0;

    mutex_lock_nested(&vq->mutex, 0);
    sock = vq->private_data;
    @@ -760,7 +763,8 @@ static void handle_rx(struct vhost_net *
    vhost_log_write(vq, vq_log, log, vhost_len,
    vq->iov, in);
    total_len += vhost_len;
    - if (unlikely(total_len >= VHOST_NET_WEIGHT)) {
    + if (unlikely(total_len >= VHOST_NET_WEIGHT) ||
    + unlikely(++recv_pkts >= VHOST_NET_PKT_WEIGHT)) {
    vhost_poll_queue(&vq->poll);
    goto out;
    }
