    Subject: [PATCH 5.10 097/103] mqprio: Correct stats in mqprio_dump_class_stats().

    From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

    commit 14132690860e4d06aa3e1c4d7d8e9866ba7756dd upstream.

    The introduction of lockless subqueues broke the class statistics.
    Before that change, the stats were accumulated in `bstats' and
    `qstats' on the stack, which were then copied to struct gnet_dump.

    After the change, `bstats' and `qstats' are initialized to 0 and
    never updated, yet still fed to gnet_dump. Instead the code updates
    sch->bstats and sch->qstats of the mqprio qdisc itself (reading from
    qdisc->cpu_bstats and qdisc->cpu_qstats), clobbering them. This is
    most likely a copy-paste error from the code in mqprio_dump().

    __gnet_stats_copy_basic() and __gnet_stats_copy_queue() accumulate
    the values in the per-CPU case, but for global stats they overwrite
    the destination, so only the stats of the last loop iteration / tc
    end up in sch->[bq]stats.
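
    For illustration, the asymmetry roughly behaves like the sketch below
    (a simplified model of the two helpers, not the exact code in
    net/core/gen_stats.c; the seqcount/u64_stats handling is omitted):

	#include <linux/percpu.h>
	#include <net/gen_stats.h>

	/* Sketch only: with a per-CPU pointer the helper sums the counters
	 * and *adds* them to the destination; without one it plain-assigns,
	 * so calling it once per tc in a loop keeps only the values of the
	 * last iteration.
	 */
	static void sketch_copy_basic(struct gnet_stats_basic_packed *dst,
				      struct gnet_stats_basic_cpu __percpu *cpu,
				      const struct gnet_stats_basic_packed *src)
	{
		if (cpu) {
			int i;

			for_each_possible_cpu(i) {
				const struct gnet_stats_basic_cpu *bc = per_cpu_ptr(cpu, i);

				dst->bytes   += bc->bstats.bytes;	/* accumulates */
				dst->packets += bc->bstats.packets;
			}
		} else {
			dst->bytes   = src->bytes;		/* overwrites */
			dst->packets = src->packets;
		}
	}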

    Use the on-stack [bq]stats variables again and add the stats manually
    in the global case.

    Fixes: ce679e8df7ed2 ("net: sched: add support for TCQ_F_NOLOCK subqueues to sch_mqprio")
    Cc: John Fastabend <john.fastabend@gmail.com>
    Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
    https://lore.kernel.org/all/20211007175000.2334713-2-bigeasy@linutronix.de/
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
    ---
    net/sched/sch_mqprio.c | 30 ++++++++++++++++++------------
    1 file changed, 18 insertions(+), 12 deletions(-)

--- a/net/sched/sch_mqprio.c
+++ b/net/sched/sch_mqprio.c
@@ -529,22 +529,28 @@ static int mqprio_dump_class_stats(struc
 		for (i = tc.offset; i < tc.offset + tc.count; i++) {
 			struct netdev_queue *q = netdev_get_tx_queue(dev, i);
 			struct Qdisc *qdisc = rtnl_dereference(q->qdisc);
-			struct gnet_stats_basic_cpu __percpu *cpu_bstats = NULL;
-			struct gnet_stats_queue __percpu *cpu_qstats = NULL;
 
 			spin_lock_bh(qdisc_lock(qdisc));
+
 			if (qdisc_is_percpu_stats(qdisc)) {
-				cpu_bstats = qdisc->cpu_bstats;
-				cpu_qstats = qdisc->cpu_qstats;
-			}
+				qlen = qdisc_qlen_sum(qdisc);
 
-			qlen = qdisc_qlen_sum(qdisc);
-			__gnet_stats_copy_basic(NULL, &sch->bstats,
-						cpu_bstats, &qdisc->bstats);
-			__gnet_stats_copy_queue(&sch->qstats,
-						cpu_qstats,
-						&qdisc->qstats,
-						qlen);
+				__gnet_stats_copy_basic(NULL, &bstats,
+							qdisc->cpu_bstats,
+							&qdisc->bstats);
+				__gnet_stats_copy_queue(&qstats,
+							qdisc->cpu_qstats,
+							&qdisc->qstats,
+							qlen);
+			} else {
+				qlen		+= qdisc->q.qlen;
+				bstats.bytes	+= qdisc->bstats.bytes;
+				bstats.packets	+= qdisc->bstats.packets;
+				qstats.backlog	+= qdisc->qstats.backlog;
+				qstats.drops	+= qdisc->qstats.drops;
+				qstats.requeues	+= qdisc->qstats.requeues;
+				qstats.overlimits += qdisc->qstats.overlimits;
+			}
 			spin_unlock_bh(qdisc_lock(qdisc));
 		}
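
For context (not part of the hunk above): per the commit message, the
on-stack `bstats'/`qstats' accumulated in this loop are then fed to the
gnet_dump. That tail of mqprio_dump_class_stats() is unchanged by this
patch and presumably looks roughly like the following sketch, where `d'
is assumed to be the function's struct gnet_dump argument:

	/* hand the per-class totals to the dump; sketch for context only */
	if (gnet_stats_copy_basic(NULL, d, NULL, &bstats) < 0 ||
	    gnet_stats_copy_queue(d, NULL, &qstats, qlen) < 0)
		return -1;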

