Subject: Re: [PATCH RFC net-next 1/3] net: page_pool: refactor dma_map into own function page_pool_dma_map
On Wed, Feb 24, 2021 at 07:56:41PM +0100, Jesper Dangaard Brouer wrote:
> In preparation for next patch, move the dma mapping into its own
> function, as this will make it easier to follow the changes.
>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> ---
> net/core/page_pool.c | 49 +++++++++++++++++++++++++++++--------------------
> 1 file changed, 29 insertions(+), 20 deletions(-)
>
> diff --git a/net/core/page_pool.c b/net/core/page_pool.c
> index ad8b0707af04..50d52aa6fbeb 100644
> --- a/net/core/page_pool.c
> +++ b/net/core/page_pool.c
> @@ -180,6 +180,31 @@ static void page_pool_dma_sync_for_device(struct page_pool *pool,
> pool->p.dma_dir);
> }
>
> +static struct page * page_pool_dma_map(struct page_pool *pool,
> + struct page *page)
> +{

Why return a struct page * here?
Wouldn't a boolean be clearer?
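Something along these lines is what I had in mind (untested, just a sketch
to illustrate the bool return):

static bool page_pool_dma_map(struct page_pool *pool, struct page *page)
{
	dma_addr_t dma;

	dma = dma_map_page_attrs(pool->p.dev, page, 0,
				 (PAGE_SIZE << pool->p.order),
				 pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
	if (dma_mapping_error(pool->p.dev, dma))
		return false;

	page->dma_addr = dma;

	if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
		page_pool_dma_sync_for_device(pool, page, pool->p.max_len);

	return true;
}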

> + dma_addr_t dma;
> +
> + /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> + * since dma_addr_t can be either 32 or 64 bits and does not always fit
> + * into page private data (i.e 32bit cpu with 64bit DMA caps)
> + * This mapping is kept for lifetime of page, until leaving pool.
> + */
> + dma = dma_map_page_attrs(pool->p.dev, page, 0,
> + (PAGE_SIZE << pool->p.order),
> + pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> + if (dma_mapping_error(pool->p.dev, dma)) {
> + put_page(page);

This is a bit confusing to read.
Going by its name, the function should only try to map the page and report
success or failure, rather than also calling put_page() on the error path.
Can't we explicitly ask the caller to call put_page() if the mapping failed?

A clear example is in patch 2/3: on first read I was convinced there was a
memory leak.
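I.e. the caller would then do something like this (again untested, only to
show where put_page() would live):

	if (pool->p.flags & PP_FLAG_DMA_MAP &&
	    unlikely(!page_pool_dma_map(pool, page))) {
		put_page(page);
		return NULL;
	}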

> + return NULL;
> + }
> + page->dma_addr = dma;
> +
> + if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> + page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
> +
> + return page;
> +}
> +
> /* slow path */
> noinline
> static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> @@ -187,7 +212,6 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> {
> struct page *page;
> gfp_t gfp = _gfp;
> - dma_addr_t dma;
>
> /* We could always set __GFP_COMP, and avoid this branch, as
> * prep_new_page() can handle order-0 with __GFP_COMP.
> @@ -211,27 +235,12 @@ static struct page *__page_pool_alloc_pages_slow(struct page_pool *pool,
> if (!page)
> return NULL;
>
> - if (!(pool->p.flags & PP_FLAG_DMA_MAP))
> - goto skip_dma_map;
> -
> - /* Setup DMA mapping: use 'struct page' area for storing DMA-addr
> - * since dma_addr_t can be either 32 or 64 bits and does not always fit
> - * into page private data (i.e 32bit cpu with 64bit DMA caps)
> - * This mapping is kept for lifetime of page, until leaving pool.
> - */
> - dma = dma_map_page_attrs(pool->p.dev, page, 0,
> - (PAGE_SIZE << pool->p.order),
> - pool->p.dma_dir, DMA_ATTR_SKIP_CPU_SYNC);
> - if (dma_mapping_error(pool->p.dev, dma)) {
> - put_page(page);
> - return NULL;
> + if (pool->p.flags & PP_FLAG_DMA_MAP) {
> + page = page_pool_dma_map(pool, page);
> + if (!page)
> + return NULL;
> }
> - page->dma_addr = dma;
> -
> - if (pool->p.flags & PP_FLAG_DMA_SYNC_DEV)
> - page_pool_dma_sync_for_device(pool, page, pool->p.max_len);
>
> -skip_dma_map:
> /* Track how many pages are held 'in-flight' */
> pool->pages_state_hold_cnt++;
>
>
>

Thanks!
/Ilias
