Subject: Re: [PATCH v2 net-next 2/3] skbuff: (re)use NAPI skb cache on allocation path
On Wed, Jan 13, 2021 at 2:37 PM Alexander Lobakin <alobakin@pm.me> wrote:
>
> Instead of calling kmem_cache_alloc() every time when building a NAPI
> skb, (re)use skbuff_heads from napi_alloc_cache.skb_cache. Previously
> this cache was only used for bulk-freeing skbuff_heads consumed via
> napi_consume_skb() or __kfree_skb_defer().
>
> Typical path is:
> - skb is queued for freeing from driver or stack, its skbuff_head
> goes into the cache instead of immediate freeing;
> - driver or stack requests NAPI skb allocation, an skbuff_head is
> taken from the cache instead of allocation.
>
> Corner cases:
> - if it's empty on skb allocation, bulk-allocate the first half;
> - if it's full on skb consuming, bulk-wipe the second half.
>
> Also try to balance its size after completing network softirqs
> (__kfree_skb_flush()).

I do not see the point of doing this rebalance (especially if we do not
rename the function to describe its purpose more accurately).

Under moderate load, we will have a reduced bulk size (typically one or two).
The number of skbs in the cache is in [0, 64); there is really no risk in
leaving skbs sitting there for a long period of time
(32 * sizeof(struct sk_buff) = 8192 bytes).
I would personally get rid of this function completely; a sketch of what
I mean follows.
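
To make that concrete, here is a rough sketch (napi_skb_cache_put() is a
hypothetical name, not something in this patch) of how the put side alone
can bound the cache, making the deferred rebalance unnecessary:

	/* Park an skbuff_head in the per-cpu cache for later reuse.
	 * Once the cache is full, bulk-free the upper half and keep
	 * the lower half around for the allocation path.
	 */
	static void napi_skb_cache_put(struct sk_buff *skb)
	{
		struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);

		nc->skb_cache[nc->skb_count++] = skb;

		if (unlikely(nc->skb_count == NAPI_SKB_CACHE_SIZE)) {
			kmem_cache_free_bulk(skbuff_head_cache,
					     NAPI_SKB_CACHE_HALF,
					     nc->skb_cache + NAPI_SKB_CACHE_HALF);
			nc->skb_count = NAPI_SKB_CACHE_HALF;
		}
	}

With that, the cache can never exceed NAPI_SKB_CACHE_SIZE entries, and the
allocation side already refills it lazily via kmem_cache_alloc_bulk().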


Also, it seems you missed my KASAN support request?
I guess this is a matter of using kasan_unpoison_range(); we can ask for help.
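Roughly what I have in mind, using the object-granular helpers
(kasan_poison_object_data() / kasan_unpoison_object_data() would also fit
here) -- only a sketch, the exact hooks are up for discussion: poison a
head when it is parked in the cache, and unpoison it before handing it
back out, so use-after-free on cached skbuff_heads is still caught:

	/* consume path: hide the parked head from KASAN */
	kasan_poison_object_data(skbuff_head_cache, skb);
	nc->skb_cache[nc->skb_count++] = skb;

	/* alloc path: make the head accessible again before reuse */
	skb = nc->skb_cache[--nc->skb_count];
	kasan_unpoison_object_data(skbuff_head_cache, skb);

Note that heads handed back to kmem_cache_free_bulk() would have to be
unpoisoned first as well.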

>
> prefetchw() on CONFIG_SLUB is dropped since it makes no sense anymore.
>
> Suggested-by: Edward Cree <ecree.xilinx@gmail.com>
> Signed-off-by: Alexander Lobakin <alobakin@pm.me>
> ---
> net/core/skbuff.c | 54 ++++++++++++++++++++++++++++++-----------------
> 1 file changed, 35 insertions(+), 19 deletions(-)
>
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index dc3300dc2ac4..f42a3a04b918 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -364,6 +364,7 @@ struct sk_buff *build_skb_around(struct sk_buff *skb,
>  EXPORT_SYMBOL(build_skb_around);
>
>  #define NAPI_SKB_CACHE_SIZE	64
> +#define NAPI_SKB_CACHE_HALF	(NAPI_SKB_CACHE_SIZE / 2)
>
>  struct napi_alloc_cache {
>  	struct page_frag_cache page;
> @@ -487,7 +488,15 @@ EXPORT_SYMBOL(__netdev_alloc_skb);
>
>  static struct sk_buff *napi_skb_cache_get(struct napi_alloc_cache *nc)
>  {
> -	return kmem_cache_alloc(skbuff_head_cache, GFP_ATOMIC);
> +	if (unlikely(!nc->skb_count))
> +		nc->skb_count = kmem_cache_alloc_bulk(skbuff_head_cache,
> +						      GFP_ATOMIC,
> +						      NAPI_SKB_CACHE_HALF,
> +						      nc->skb_cache);
> +	if (unlikely(!nc->skb_count))
> +		return NULL;
> +
> +	return nc->skb_cache[--nc->skb_count];
>  }
>
>  /**
> @@ -867,40 +876,47 @@ void __consume_stateless_skb(struct sk_buff *skb)
>  void __kfree_skb_flush(void)
>  {
>  	struct napi_alloc_cache *nc = this_cpu_ptr(&napi_alloc_cache);
> +	size_t count;
> +	void **ptr;
> +
> +	if (unlikely(nc->skb_count == NAPI_SKB_CACHE_HALF))
> +		return;
> +
> +	if (nc->skb_count > NAPI_SKB_CACHE_HALF) {
> +		count = nc->skb_count - NAPI_SKB_CACHE_HALF;
> +		ptr = nc->skb_cache + NAPI_SKB_CACHE_HALF;
>
> -	/* flush skb_cache if containing objects */
> -	if (nc->skb_count) {
> -		kmem_cache_free_bulk(skbuff_head_cache, nc->skb_count,
> -				     nc->skb_cache);
> -		nc->skb_count = 0;
> +		kmem_cache_free_bulk(skbuff_head_cache, count, ptr);
> +		nc->skb_count = NAPI_SKB_CACHE_HALF;
> +	} else {
> +		count = NAPI_SKB_CACHE_HALF - nc->skb_count;
> +		ptr = nc->skb_cache + nc->skb_count;
> +
> +		nc->skb_count += kmem_cache_alloc_bulk(skbuff_head_cache,
> +						       GFP_ATOMIC, count,
> +						       ptr);
>  	}
>  }
>

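One more remark on the hunk above: napi_skb_cache_get() can now fail (the
bulk allocation may return zero objects), so every caller has to handle a
NULL return, along the lines of (hypothetical caller shape, not from this
patch):

	skb = napi_skb_cache_get(nc);
	if (unlikely(!skb))
		return NULL;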