Date: 2011-05-06
From: Christoph Lameter <cl@linux.com>
Subject: [slubllv4 01/16] slub: Per object NUMA support
Currently slub applies NUMA policies per allocated slab page. Change
that to apply memory policies for each individual object allocated.

For example, before this patch MPOL_INTERLEAVE would keep returning objects
from the same slab page until a new slab page was allocated. Now the policy
is consulted for each allocation, so each object can come from a slab page
on a different node.

This increases the overhead of the fastpath under NUMA.
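
To make the behavioral difference concrete, here is a small userspace sketch
(illustration only, not kernel code and not part of this patch): it models
per-page versus per-object interleaving on a hypothetical two-node machine
with eight objects per slab page. The constants and the round-robin counter
are made up for the illustration; in the kernel the per-object node comes
from slab_node(current->mempolicy) as in the hunk below.

/*
 * Illustration only, not kernel code: models how MPOL_INTERLEAVE
 * placement changes from per-slab-page to per-object.  Assumes a
 * hypothetical 2-node system and 8 objects per slab page.
 */
#include <stdio.h>

#define NR_NODES		2
#define OBJECTS_PER_PAGE	8
#define NR_ALLOCS		16

int main(void)
{
	int next_node = 0;

	printf("alloc  per-page-node  per-object-node\n");
	for (int i = 0; i < NR_ALLOCS; i++) {
		/*
		 * Before: the interleave policy is only consulted when a
		 * new slab page is allocated, so a whole page worth of
		 * objects lands on the same node.
		 */
		int per_page_node = (i / OBJECTS_PER_PAGE) % NR_NODES;

		/*
		 * After: the policy is consulted for every object, so
		 * consecutive allocations rotate across nodes.
		 */
		int per_object_node = next_node;

		next_node = (next_node + 1) % NR_NODES;
		printf("%5d  %13d  %15d\n", i, per_page_node, per_object_node);
	}
	return 0;
}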

Signed-off-by: Christoph Lameter <cl@linux.com>

---
mm/slub.c | 17 +++++++++++++++++
1 file changed, 17 insertions(+)

Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-05-05 15:21:51.000000000 -0500
+++ linux-2.6/mm/slub.c 2011-05-05 15:28:33.000000000 -0500
@@ -1873,6 +1873,21 @@ debug:
 	goto unlock_out;
 }
 
+static __always_inline int alternate_slab_node(struct kmem_cache *s,
+			gfp_t flags, int node)
+{
+#ifdef CONFIG_NUMA
+	if (unlikely(node == NUMA_NO_NODE &&
+			!(flags & __GFP_THISNODE) &&
+			!in_interrupt())) {
+		if ((s->flags & SLAB_MEM_SPREAD) && cpuset_do_slab_mem_spread())
+			node = cpuset_slab_spread_node();
+		else if (current->mempolicy)
+			node = slab_node(current->mempolicy);
+	}
+#endif
+	return node;
+}
 /*
  * Inlined fastpath so that allocation functions (kmalloc, kmem_cache_alloc)
  * have the fastpath folded into their functions. So no function call
@@ -1893,6 +1908,8 @@ static __always_inline void *slab_alloc(
 	if (slab_pre_alloc_hook(s, gfpflags))
 		return NULL;
 
+	node = alternate_slab_node(s, gfpflags, node);
+
 redo:
 
 	/*
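
For completeness, a sketch of how a task would establish the interleave
policy that the new alternate_slab_node() helper consults. This is ordinary
libnuma usage and is not part of the patch; the fopen() call is only a
placeholder for work that happens to trigger slab allocations on the task's
behalf.

/*
 * Not part of the patch: a userspace sketch showing how a task sets
 * MPOL_INTERLEAVE via libnuma.  With this patch applied, kernel slab
 * allocations made on behalf of this task (and not pinned by
 * __GFP_THISNODE or done from interrupt context) pick a node per
 * object through slab_node(current->mempolicy).
 * Build with: gcc -o interleave interleave.c -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	if (numa_available() < 0) {
		fprintf(stderr, "NUMA is not available on this system\n");
		return EXIT_FAILURE;
	}

	/* Interleave this task's memory policy across all nodes. */
	numa_set_interleave_mask(numa_all_nodes_ptr);

	/*
	 * Any subsequent work that causes kernel slab allocations on our
	 * behalf (opening files, creating sockets, ...) is now subject to
	 * per-object interleaving.
	 */
	FILE *f = fopen("/proc/self/status", "r");
	if (f)
		fclose(f);

	return EXIT_SUCCESS;
}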

