Subject: Re: [PATCH net-next v9 7/9] libeth: add Rx buffer management
Date: 2024-04-05
On 4/4/24 17:44, Alexander Lobakin wrote:
> Add a couple of intuitive helpers to hide the Rx buffer implementation
> details in the library and avoid duplicating them across drivers. The
> settings are somewhat optimized for 100G+ NICs, but nothing here is
> really HW-specific.
> Use the new page_pool_dev_alloc() to dynamically switch between
> split-page and full-page modes depending on the MTU, page size,
> required headroom, etc. For example, on x86_64 with the default driver
> settings, each page is shared between 2 buffers. Turning on XDP (not
> in this series) increases the headroom requirement and pushes truesize
> past the 2048-byte boundary, so each buffer starts getting a full
> page.
> The "ceiling" limit is %PAGE_SIZE, as only order-0 pages are used to
> avoid compound-page overhead. For the above architecture, this means a
> maximum linear frame size of 3712 w/o XDP.
> Note that &libeth_buf_queue is not a complete queue/ring structure for
> now, but rather a shim; eventually, the libeth-enabled drivers will
> move to it, with iavf being the first one.
>
> Signed-off-by: Alexander Lobakin <aleksander.lobakin@intel.com>
> ---
> drivers/net/ethernet/intel/libeth/Kconfig | 1 +
> include/net/libeth/rx.h | 117 ++++++++++++++++++++++
> drivers/net/ethernet/intel/libeth/rx.c | 98 ++++++++++++++++++
> 3 files changed, 216 insertions(+)
>
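
The 3712 figure checks out for me if I plug in the usual x86_64
constants: 4096 (PAGE_SIZE) - 64 (NET_SKB_PAD + NET_IP_ALIGN of
headroom) - 320 (SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) of
tailroom) = 3712.
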
[...]

> +/**
> + * struct libeth_fqe - structure representing an Rx buffer
> + * @page: page holding the buffer
> + * @offset: offset from the page start (to the headroom)
> + * @truesize: total space occupied by the buffer (w/ headroom and tailroom)
> + *
> + * Depending on the MTU, the API switches between the one-page-per-frame
> + * and shared-page models (to conserve memory on bigger-page platforms).
> + * In the former case, @offset is always 0 and @truesize is always
> + * %PAGE_SIZE.
> + */
> +struct libeth_fqe {
> +	struct page *page;
> +	u32 offset;
> +	u32 truesize;
> +} __aligned_largest;
> +
> +/**
> + * struct libeth_fq - structure representing a buffer queue
> + * @fp: hotpath part of the structure
> + * @pp: &page_pool for buffer management
> + * @fqes: array of Rx buffers
> + * @truesize: size to allocate per buffer, w/ overhead
> + * @count: number of descriptors/buffers the queue has
> + * @buf_len: HW-writeable length of each buffer
> + * @nid: ID of the closest NUMA node with memory
> + */
> +struct libeth_fq {
> +	struct_group_tagged(libeth_fq_fp, fp,
> +		struct page_pool *pp;
> +		struct libeth_fqe *fqes;
> +
> +		u32 truesize;
> +		u32 count;
> +	);
> +
> +	/* Cold fields */
> +	u32 buf_len;
> +	int nid;
> +};

[...]
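
Just to confirm I follow the intended refill flow: I imagine a driver
doing roughly the below per descriptor. This is only my own sketch of
how the pieces fit together; libeth_refill_one() is a name I made up
for illustration, and the real helpers live in the part of the patch I
snipped above.

#include <linux/dma-mapping.h>
#include <net/page_pool/helpers.h>
#include <net/libeth/rx.h>

/* Sketch only: refill buffer @i of the fill queue */
static dma_addr_t libeth_refill_one(const struct libeth_fq_fp *fq, u32 i)
{
	struct libeth_fqe *fqe = &fq->fqes[i];

	/* Request the precomputed per-buffer truesize; the page_pool
	 * either carves a fragment out of a shared page or hands out a
	 * whole page, and writes back the actual offset and truesize.
	 */
	fqe->truesize = fq->truesize;
	fqe->page = page_pool_dev_alloc(fq->pp, &fqe->offset, &fqe->truesize);
	if (unlikely(!fqe->page))
		return DMA_MAPPING_ERROR;

	/* DMA address to program into the Rx descriptor */
	return page_pool_get_dma_addr(fqe->page) + fqe->offset;
}

Is that the intended usage?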

Could you please unpack the meaning of the `fq` and `fqe` acronyms here?

Otherwise, the whole series looks very good to me, thank you very much!

