Date: 11 Jan 2021
From: Alexander Lobakin
Subject: [PATCH net-next 0/5] skbuff: introduce skbuff_heads bulking and reusing
Inspired by the cpu_map_kthread_run() and _kfree_skb_defer() logic.

Currently, all kinds of skb allocation allocate skbuff_heads
one by one via kmem_cache_alloc().
On the other hand, we have the percpu napi_alloc_cache, which stores
skbuff_heads queued up for freeing and flushes them in bulk.
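For reference, here is a simplified sketch of the existing deferred-
free path (field and function names follow net/core/skbuff.c before
this series; the bodies are abbreviated):

	#define NAPI_SKB_CACHE_SIZE	64

	struct napi_alloc_cache {
		struct page_frag_cache page;
		unsigned int skb_count;
		void *skb_cache[NAPI_SKB_CACHE_SIZE];
	};

	static DEFINE_PER_CPU(struct napi_alloc_cache, napi_alloc_cache);

	static void _kfree_skb_defer(struct sk_buff *skb)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		/* Release data and destructors, then park the head. */
		skb_release_all(skb);
		nc->skb_cache[nc->skb_count++] = skb;

		/* Once full, free the whole batch with one bulk call. */
		if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
			kmem_cache_free_bulk(skbuff_head_cache,
					     NAPI_SKB_CACHE_SIZE,
					     nc->skb_cache);
			nc->skb_count = 0;
		}
	}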

We can use this struct to cache and bulk not only freeing, but also
allocation of new skbuff_heads, and to reuse cached-to-be-freed
heads instead of allocating new ones.
As accessing napi_alloc_cache implies NAPI softirq context, do this
only for __napi_alloc_skb() and its derivatives (napi_alloc_skb()
and napi_get_frags()). They have roughly 69 call sites, which is
quite a number. A sketch of the resulting allocation path follows.
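Roughly, the allocation side then looks like this (a sketch of the
idea only, not the exact patches; napi_skb_cache_get() is an
illustrative helper name here, and the series additionally renames
the cache fields and open-codes __build_skb()):

	static struct sk_buff *napi_skb_cache_get(void)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		/* Refill the percpu cache in bulk when it runs empty;
		 * cached-to-be-freed heads land here as well and get
		 * picked up first, before hitting the slab allocator.
		 */
		if (unlikely(!nc->skb_count))
			nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
							      GFP_ATOMIC,
							      NAPI_SKB_CACHE_SIZE / 2,
							      nc->skb_cache);
		if (unlikely(!nc->skb_count))
			return NULL;

		return nc->skb_cache[--nc->skb_count];
	}

__napi_alloc_skb() would then take heads from this cache instead of
calling kmem_cache_alloc() for each one.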

iperf3 showed a nice bump from 910 to 935 Mbit/s while performing
UDP VLAN NAT on a 1.2 GHz MIPS board. The boost is likely to be
much bigger on more powerful hosts and NICs handling tens of Mpps.

Patches 1-2 are preparation steps, while 3-5 do the real work.

Alexander Lobakin (5):
skbuff: rename fields of struct napi_alloc_cache to be more intuitive
skbuff: open-code __build_skb() inside __napi_alloc_skb()
skbuff: reuse skbuff_heads from flush_skb_cache if available
skbuff: allocate skbuff_heads by bulks instead of one by one
skbuff: refill skb_cache early from deferred-to-consume entries

net/core/skbuff.c | 62 ++++++++++++++++++++++++++++++++++++-----------
1 file changed, 48 insertions(+), 14 deletions(-)

--
2.30.0

