Subject: Re: [net-next v4 PATCH] page_pool: handle page recycle for NUMA_NO_NODE condition
On Thu, 19 Dec 2019 13:09:25 +0100
Michal Hocko <mhocko@kernel.org> wrote:

> On Wed 18-12-19 09:01:35, Jesper Dangaard Brouer wrote:
> [...]
> > For the NUMA_NO_NODE case, when a NIC IRQ is moved to another NUMA
> > node, then the ptr_ring will be emptied in chunks of 65
> > (PP_ALLOC_CACHE_REFILL+1) pages per allocation, and allocations fall
> > through to the real page-allocator with the new nid derived from
> > numa_mem_id(). We accept
> > that transitioning the alloc cache doesn't happen immediately.
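
For readers without the patch in front of them, a minimal sketch of the
refill logic described above (paraphrased rather than copied from the
patch; page_pool_return_page() stands in for the pool's actual release
helper):

	int i = 0;
	int pref_nid;
	struct page *page;

	/* With NUMA_NO_NODE, prefer the node of the CPU running RX-NAPI;
	 * softirq context keeps the CPU, and thus numa_mem_id(), stable.
	 */
	pref_nid = (pool->p.nid == NUMA_NO_NODE) ? numa_mem_id() : pool->p.nid;

	/* Consume at most PP_ALLOC_CACHE_REFILL+1 (65) pages per refill */
	while (i++ <= PP_ALLOC_CACHE_REFILL) {
		page = __ptr_ring_consume(&pool->ring);
		if (!page)
			break;

		if (likely(page_to_nid(page) == pref_nid)) {
			pool->alloc.cache[pool->alloc.count++] = page;
		} else {
			/* Remote page, e.g. after an IRQ move: release it
			 * to the page allocator instead of recycling it.
			 */
			page_pool_return_page(pool, page);
		}
	}
	/* If nothing matched pref_nid, the caller falls through to the
	 * real page allocator on numa_mem_id().
	 */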

Oh, I just realized that drivers usually refill several RX packet-pages
at once, which means this refill path is called N times; during a NUMA
change that results in N * 65 pages being returned.
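
E.g. a (hypothetical) driver bulk-refilling 16 RX descriptors at once
could push back up to 16 * 65 = 1040 pages to the page allocator in a
single refill cycle.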


> Could you explain what the expected semantics of NUMA_NO_NODE are in
> this case? Does it always imply the preferred locality? See my other
> email[1] on this matter.

I do think we want NUMA_NO_NODE to mean preferred locality. My code
allows a page to come from a remote NUMA node, but once it is recycled
via the ptr_ring, we release pages that do not belong to the local NUMA
node back to the page allocator (the local node being determined by the
CPU processing RX packets from the driver's RX-ring).
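
In other words (a sketch; this relies on the stock alloc_pages_node()
behavior, where nid == NUMA_NO_NODE falls back to numa_mem_id()):

	/* Slow path, cache and ptr_ring exhausted: go to the real page
	 * allocator.  With pool->p.nid == NUMA_NO_NODE, alloc_pages_node()
	 * picks numa_mem_id(), i.e. the node local to the CPU running
	 * RX-NAPI -- preferred locality.
	 */
	page = alloc_pages_node(pool->p.nid, gfp, pool->p.order);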


> [1] http://lkml.kernel.org/r/20191219115338.GC26945@dhcp22.suse.cz

--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
