Subject: [PATCH RFC] net: sched: implement TCQ_F_CAN_BYPASS for lockless qdisc

Currently pfifo_fast has both the TCQ_F_CAN_BYPASS and TCQ_F_NOLOCK
flags set, but queue-discipline bypass does not work for a lockless
qdisc because the skb is always enqueued to the qdisc even when the
qdisc is empty, see __dev_xmit_skb().
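
For reference, the TCQ_F_NOLOCK branch of __dev_xmit_skb() before this
patch looks roughly like the sketch below (simplified, details may
differ slightly between kernel versions): the skb is enqueued
unconditionally, so TCQ_F_CAN_BYPASS never takes effect for a lockless
qdisc.

	if (q->flags & TCQ_F_NOLOCK) {
		/* always enqueue, even when the qdisc is empty */
		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
		qdisc_run(q);

		if (unlikely(to_free))
			kfree_skb_list(to_free);
		return rc;
	}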

This patch calls sch_direct_xmit() to transmit the skb directly to
the driver when the lockless qdisc is empty too, which avoids the
enqueue and dequeue operations. qdisc->empty is set to false whenever
an skb is enqueued, and is set to true when dequeuing returns NULL,
see pfifo_fast_dequeue().
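
As a rough, simplified sketch (existing code, not part of this patch,
and details may differ from the actual pfifo_fast implementation), the
dequeue side is what sets qdisc->empty back to true once the queue is
drained:

	static struct sk_buff *pfifo_fast_dequeue(struct Qdisc *qdisc)
	{
		struct pfifo_fast_priv *priv = qdisc_priv(qdisc);
		struct sk_buff *skb = NULL;
		int band;

		/* consume from the highest-priority non-empty band */
		for (band = 0; band < PFIFO_FAST_BANDS && !skb; band++)
			skb = __skb_array_consume(band2list(priv, band));

		if (likely(skb))
			qdisc_update_stats_at_dequeue(qdisc, skb);
		else
			/* nothing left: mark empty so bypass is possible again */
			WRITE_ONCE(qdisc->empty, true);

		return skb;
	}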

Also, the qdisc is rescheduled at the end of qdisc_run_end() when
q->empty is false, so that an skb enqueued by another CPU that could
not grab q->seqlock is not left stuck in the queue.

Performance in an ip_forward test increases by about 10% with this
patch.

Signed-off-by: Yunsheng Lin <linyunsheng@huawei.com>
---
 include/net/sch_generic.h |  7 +++++--
 net/core/dev.c            | 11 +++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/include/net/sch_generic.h b/include/net/sch_generic.h
index 2d6eb60..6591356 100644
--- a/include/net/sch_generic.h
+++ b/include/net/sch_generic.h
@@ -161,7 +161,6 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 	if (qdisc->flags & TCQ_F_NOLOCK) {
 		if (!spin_trylock(&qdisc->seqlock))
 			return false;
-		WRITE_ONCE(qdisc->empty, false);
 	} else if (qdisc_is_running(qdisc)) {
 		return false;
 	}
@@ -176,8 +175,12 @@ static inline bool qdisc_run_begin(struct Qdisc *qdisc)
 static inline void qdisc_run_end(struct Qdisc *qdisc)
 {
 	write_seqcount_end(&qdisc->running);
-	if (qdisc->flags & TCQ_F_NOLOCK)
+	if (qdisc->flags & TCQ_F_NOLOCK) {
 		spin_unlock(&qdisc->seqlock);
+
+		if (unlikely(!READ_ONCE(qdisc->empty)))
+			__netif_schedule(qdisc);
+	}
 }
 
 static inline bool qdisc_may_bulk(const struct Qdisc *qdisc)
diff --git a/net/core/dev.c b/net/core/dev.c
index 2bfdd52..fa8504d 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -3791,7 +3791,18 @@ static inline int __dev_xmit_skb(struct sk_buff *skb, struct Qdisc *q,
 	qdisc_calculate_pkt_len(skb, q);
 
 	if (q->flags & TCQ_F_NOLOCK) {
+		if (q->flags & TCQ_F_CAN_BYPASS && READ_ONCE(q->empty) && qdisc_run_begin(q)) {
+			qdisc_bstats_cpu_update(q, skb);
+
+			if (sch_direct_xmit(skb, q, dev, txq, NULL, true) && !READ_ONCE(q->empty))
+				__qdisc_run(q);
+
+			qdisc_run_end(q);
+			return NET_XMIT_SUCCESS;
+		}
+
 		rc = q->enqueue(skb, q, &to_free) & NET_XMIT_MASK;
+		WRITE_ONCE(q->empty, false);
 		qdisc_run(q);
 
 		if (unlikely(to_free))
-- 
2.7.4