Subject: Re: [PATCH][v2] page_pool: handle page recycle for NUMA_NO_NODE condition
On Mon 16-12-19 14:34:26, Ilias Apalodimas wrote:
> Hi Michal,
> On Mon, Dec 16, 2019 at 01:15:57PM +0100, Michal Hocko wrote:
> > On Thu 12-12-19 09:34:14, Yunsheng Lin wrote:
> > > +CC Michal, Peter, Greg and Bjorn
> > > Because there has been discussion before about where and how
> > > NUMA_NO_NODE should be handled.
> >
> > I do not have a full context. What is the question here?
>
> When we allocate pages for the page_pool API, the driver writer decides,
> during init, which NUMA node to use. The API can, in some cases, recycle
> the memory instead of freeing and re-allocating it. If the NUMA node has
> changed (because of irq affinity for example), we forbid recycling and free
> the memory, since recycling and then using memory on a far NUMA node is
> more expensive than freeing and re-allocating it (at least on the
> architectures we tried).
> Since checking this per packet would be expensive, the burden falls on the
> driver writer: drivers *have* to call page_pool_update_nid() or
> page_pool_nid_changed() if they want to check for that, and the check runs
> once per NAPI cycle.
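
For illustration, that per-NAPI check might look roughly like the sketch
below in a driver's poll routine; my_rx_queue, my_napi_poll() and
my_clean_rx() are made-up names, while page_pool_nid_changed() and
numa_mem_id() are the real helpers:

	static int my_napi_poll(struct napi_struct *napi, int budget)
	{
		struct my_rx_queue *rxq = container_of(napi, struct my_rx_queue, napi);

		/*
		 * Refresh the pool's node hint once per NAPI cycle; this is
		 * a cheap no-op unless the preferred node actually changed,
		 * e.g. after an irq affinity move.
		 */
		page_pool_nid_changed(rxq->page_pool, numa_mem_id());

		return my_clean_rx(rxq, budget);
	}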

Thanks for the clarification.

> The current code in the API, though, does not account for NUMA_NO_NODE. That's
> what this patch is trying to fix.
> If the page_pool params are initialized with NUMA_NO_NODE, we *never* recycle
> the memory. This happens because the API allocates memory with
> 'nid = numa_mem_id()' when NUMA_NO_NODE is configured, so the current check
> 'page_to_nid(page) == pool->p.nid' never matches.
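
A rough sketch of the mismatch being described, not the exact upstream
code (recycle_into_pool() is a made-up placeholder; alloc_pages_node(),
numa_mem_id(), page_to_nid() and put_page() are real):

	/* Allocation side: NUMA_NO_NODE is resolved to the local node. */
	page = alloc_pages_node(pool->p.nid == NUMA_NO_NODE ?
				numa_mem_id() : pool->p.nid,
				gfp, pool->p.order);

	/*
	 * Recycle side: pool->p.nid stays -1 (NUMA_NO_NODE) while
	 * page_to_nid(page) is always a real node id >= 0, so the
	 * comparison below can never be true and nothing is recycled.
	 */
	if (page_to_nid(page) == pool->p.nid)
		recycle_into_pool(pool, page);
	else
		put_page(page);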

OK. There is no explicit mention of the expected behavior for
NUMA_NO_NODE. The semantic is usually that there is no NUMA placement
requirement and the MM code simply starts the allocation from the local
node in that case. But the memory might come from any node, so there is
no "local node" guarantee.

So the main question is what is the expected semantic? Do people expect
that NUMA_NO_NODE implies locality? Why don't you simply always reuse
when there was no explicit numa requirement?
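
For reference, the page allocator's node fallback looks roughly like this
(as in include/linux/gfp.h of that era), which is why NUMA_NO_NODE by
itself carries no locality guarantee beyond the initial allocation:

	static inline struct page *alloc_pages_node(int nid, gfp_t gfp_mask,
						    unsigned int order)
	{
		/* No placement requirement: start from the local memory node. */
		if (nid == NUMA_NO_NODE)
			nid = numa_mem_id();

		return __alloc_pages_node(nid, gfp_mask, order);
	}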

> The initial proposal was to check:
> pool->p.nid == NUMA_NO_NODE && page_to_nid(page) == numa_mem_id()

> After that the thread spun out of control :)
> My question is: do we *really* have to check for
> page_to_nid(page) == numa_mem_id()? If the architecture is not NUMA-aware,
> wouldn't pool->p.nid == NUMA_NO_NODE be enough?

If the architecture is !NUMA then numa_mem_id() and page_to_nid() should
always be equal, and both zero.
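
Putting the two halves together, the proposed condition amounts to
something like the helper below (page_pool_node_matches() is a made-up
name for illustration, not necessarily the final upstream fix). On a
!NUMA build both page_to_nid() and numa_mem_id() return 0, so the extra
comparison is trivially satisfied and costs nothing:

	static bool page_pool_node_matches(const struct page_pool *pool,
					   const struct page *page)
	{
		/* Explicitly requested node: recycle only pages from that node. */
		if (page_to_nid(page) == pool->p.nid)
			return true;

		/* No placement requirement: recycle pages local to this CPU. */
		return pool->p.nid == NUMA_NO_NODE &&
		       page_to_nid(page) == numa_mem_id();
	}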

--
Michal Hocko
SUSE Labs
