Date: Tue, 21 Mar 2023 20:12:29 +0800 (CST)
Subject: [PATCH] rps: process the skb directly if rps cpu not changed
From: xu xin <xu.xin16@zte.com.cn>
In the RPS procedure of NAPI receiving, regardless of whether the rps-calculated CPU of the skb equals the currently processing CPU, RPS always uses enqueue_to_backlog to enqueue the skb to the per-cpu backlog, which triggers a new NET_RX softirq.

Actually, it is not necessary to enqueue the skb to the backlog when the rps-calculated CPU id equals the current processing CPU id; we can call __netif_receive_skb or __netif_receive_skb_list to process the skb directly instead. The benefit is that this reduces the number of NET_RX softirqs and the processing delay of the skb.
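To make the dispatch decision concrete, below is a minimal userspace sketch of the patched control flow (illustration only, not kernel code; get_rps_cpu_stub and the printf bodies are hypothetical stand-ins for get_rps_cpu, enqueue_to_backlog and __netif_receive_skb):

#include <stdio.h>

/* Hypothetical stand-in for get_rps_cpu(): picks a target CPU
 * from the flow hash, or returns -1 when no RPS map is set. */
static int get_rps_cpu_stub(int flow_hash, int ncpus)
{
	return ncpus > 0 ? flow_hash % ncpus : -1;
}

static void dispatch(int flow_hash, int current_cpu, int ncpus)
{
	int cpu = get_rps_cpu_stub(flow_hash, ncpus);

	if (cpu >= 0 && cpu != current_cpu) {
		/* Cross-CPU case: hand the skb to the remote per-cpu
		 * backlog, which raises a NET_RX softirq on that CPU. */
		printf("flow %d: enqueue_to_backlog on CPU %d\n", flow_hash, cpu);
	} else {
		/* Same-CPU case (the patched path): process directly,
		 * skipping the extra softirq round trip. */
		printf("flow %d: __netif_receive_skb on CPU %d\n", flow_hash, current_cpu);
	}
}

int main(void)
{
	/* Simulate NAPI polling on CPU 0 of a two-core system. */
	for (int flow_hash = 0; flow_hash < 4; flow_hash++)
		dispatch(flow_hash, 0, 2);
	return 0;
}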
The measured result shows the patch brings a 50% reduction in NET_RX softirqs. The test was done in a QEMU environment with a two-core CPU, using iperf3:

taskset 01 iperf3 -c 192.168.2.250 -t 3 -u -R;
taskset 02 iperf3 -c 192.168.2.250 -t 3 -u -R;
Previous RPS:
		CPU0	CPU1
NET_RX:		45	0	(before iperf3 testing)
NET_RX:		1095	241	(after iperf3 testing)

Patched RPS:
		CPU0	CPU1
NET_RX:		28	4	(before iperf3 testing)
NET_RX:		573	32	(after iperf3 testing)
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Reviewed-by: Zhang Yunkai <zhang.yunkai@zte.com.cn>
Reviewed-by: Yang Yang <yang.yang29@zte.com.cn>
Cc: Xuexin Jiang <jiang.xuexin@zte.com.cn>
---
 net/core/dev.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/net/core/dev.c b/net/core/dev.c
index c7853192563d..c33ddac3c012 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -5666,8 +5666,9 @@ static int netif_receive_skb_internal(struct sk_buff *skb)
 	if (static_branch_unlikely(&rps_needed)) {
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu = get_rps_cpu(skb->dev, skb, &rflow);
+		int current_cpu = smp_processor_id();
 
-		if (cpu >= 0) {
+		if (cpu >= 0 && cpu != current_cpu) {
 			ret = enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
 			rcu_read_unlock();
 			return ret;
@@ -5699,8 +5700,9 @@ void netif_receive_skb_list_internal(struct list_head *head)
 	list_for_each_entry_safe(skb, next, head, list) {
 		struct rps_dev_flow voidflow, *rflow = &voidflow;
 		int cpu = get_rps_cpu(skb->dev, skb, &rflow);
+		int current_cpu = smp_processor_id();
 
-		if (cpu >= 0) {
+		if (cpu >= 0 && cpu != current_cpu) {
 			/* Will be handled, remove from list */
 			skb_list_del_init(skb);
 			enqueue_to_backlog(skb, cpu, &rflow->last_qtail);
-- 
2.15.2