Subject: available memory imbalance on large NUMA systems
Summary:
--------
We wish to implement node round-robin memory allocation for certain large
kernel hash tables allocated during kernel startup on NUMA systems.
We are interested in getting a community-accepted solution into the
2.6 kernel.

Background:
-----------
NUMA systems are made of multiple nodes connected together by a fast
interconnect to make one large system. Each node has its own set of
processors and memory. There is a notion of memory that is close to a
node (perhaps memory on the node itself) and memory that is further away
(perhaps located on a different node, reached through a router).

When the kernel starts up, certain hash tables are allocated. The routines
that allocate these hash tables are not NUMA-aware: they see one large pool
of memory and carve a chunk out of it, often sized in proportion to the
total memory in the system.

The end result is that the first node in the system is hit much harder for
memory than the other nodes. On a very large system (32 or more nodes with
4 GB of memory per node, for example), the first node can be left with less
than half of its memory available.

This imbalance is not desirable for folks wishing to run large computational
jobs that depend on memory being available on all nodes. For example,
certain large MPI programs may be negatively impacted if they expect to be
able to get equal amounts of memory from all nodes.

Example Fix:
------------
To fix this problem, we propose implementing a round-robin memory allocation
scheme. We have included an example implementation as a patch to 2.4.21
(attached). In it, we create a new function in vmalloc.h named
alloc_big_struct. It is based on vmalloc, so the resulting memory is
virtually contiguous and accessed through the kernel page tables. This
function can be used in place of __get_free_pages() to allocate certain
kernel hash tables, such as the page-cache hash table or the dentry hash
table.
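
To make the intent concrete, here is a condensed view of the call-site
change the patch makes at each hash-table allocation (the exact hunks are
in the attached patch):

	/* Before: the whole table comes from whichever node satisfies the
	 * allocation -- at boot time that is almost always node 0. */
	hash_table = (struct buffer_head **)
		__get_free_pages(GFP_ATOMIC, order);

	/* After: on CONFIG_NUMA kernels, alloc_big_struct spreads the
	 * backing pages across nodes via vmalloc; otherwise it falls
	 * through to __get_free_pages. */
	hash_table = (struct buffer_head **)
		alloc_big_struct(GFP_ATOMIC, order);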

Now, I understand that this patch would not be accepted by the community as
it stands right now. So think of the patch as an example to illustrate my
point rather than a polished proposal. In fact, this patch may not cleanly
apply to kernel.org 2.4 as-is. The example makes heavy use of vmalloc for
NUMA systems and I understand (from yesterday :) that this isn't necessarily
desirable. I think the example patch does still illustrate what we're trying
to do.

I guess I'm hoping to be pointed in a direction that will have a fair chance
of being accepted into the 2.6 kernel if proposed. Depending on what
direction this takes, I or someone else will attempt to implement something.

Here is a detailed list of the changes in the 2.4 example and what they do;
a condensed sketch of how the main pieces fit together follows the list.

mm.h: Add a new action modifier, GFP_ROUND_ROBIN. This modifier is used by
alloc_area_pte in vmalloc.c. If the bit is set, round-robin allocation
is used. Add a function called alloc_pages_round_robin and a macro
alloc_page_round_robin that calls it. These are meant to mirror alloc_page
and alloc_pages.

vmalloc.h: Add function named alloc_big_struct. This function takes the
place of __get_free_pages when we wish to do round-robin allocation.
It takes an order number as input but converts it to a number of bytes
as is needed by __vmalloc. When it calls __vmalloc, it ORs GFP_ROUND_ROBIN
to the gfpmask so alloc_area_pte knows to do round-robin allocation of
memory. If the system isn't NUMA, a macro named alloc_big_struct simply
calls __get_free_pages.

vmalloc.c: alloc_area_pte is adjusted to look for GFP_ROUND_ROBIN. If it's
set, alloc_page_round_robin is called. Otherwise, alloc_page is called as
before.

numa.c: page_cache_alloc is modified (inside an ifdef CONFIG_NUMA) to
use the new alloc_page_round_robin support instead of alloc_pages_node.
This avoids code duplication.

tcp.c, buffer.c, inode.c, dcache.c:
When allocating large hash tables, the __get_free_pages call is replaced
with alloc_big_struct to spread the memory use across nodes.
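
Taken together, the pieces above fit like this (condensed from the attached
patch; on non-NUMA configurations alloc_big_struct is just a macro that
falls through to __get_free_pages):

	/* mm.h: hand out pages from the next node in line. This only needs
	 * to be approximately fair, so no atomic op is taken on the
	 * per-cpu counter. */
	static inline struct page *alloc_pages_round_robin(unsigned int gfp_mask,
							   unsigned int order)
	{
		local_cpu_data->page_cache_nodeval++;
		return alloc_pages_node(local_cpu_data->page_cache_nodeval % numnodes,
					gfp_mask, order);
	}

	/* vmalloc.h: allocate "order" pages worth of virtually contiguous
	 * memory. GFP_ROUND_ROBIN tells alloc_area_pte in vmalloc.c to use
	 * alloc_page_round_robin for each backing page instead of
	 * alloc_page. */
	static inline unsigned long alloc_big_struct(unsigned int gfp_mask,
						     unsigned int order)
	{
		if (order > MAX_ORDER)
			return 0;
		return (unsigned long) __vmalloc(PAGE_SIZE << order,
						 gfp_mask | GFP_ROUND_ROBIN,
						 PAGE_KERNEL);
	}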


Testing
-------
On Altix systems of various sizes, we ran aim7 and compared results. We
found almost no difference in performance between the round-robin-enabled
kernels and kernels without this fix implemented.


Big Hash Tables
---------------
As a side point, some of the hash tables allocated during startup get very
large on large-memory systems (systems with a terabyte of memory, for
example). Someone may wish to consider implementing a cap on the size of
some of these tables. My example doesn't address this issue - it just
spreads the load. In fact, I don't have a good sense of what reasonable caps
for these tables would be, if any.
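
Purely for illustration, a cap could be as simple as clamping the allocation
order at each of these call sites. The limit below is an arbitrary
placeholder, not a tuned value:

	/* Hypothetical cap: MAX_HASH_TABLE_ORDER is a made-up placeholder;
	 * picking a real limit would need measurement on large-memory
	 * systems. */
	#define MAX_HASH_TABLE_ORDER	10	/* 1024 pages = 4 MB with 4k pages */

		if (order > MAX_HASH_TABLE_ORDER)
			order = MAX_HASH_TABLE_ORDER;
		hash_table = (struct buffer_head **)
			alloc_big_struct(GFP_ATOMIC, order);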


The example patch is attached.

--
Erik Jacobson - Linux System Software - Silicon Graphics - Eagan, Minnesota

diff -Naur -Xskipfiles linux.orig/linux/fs/buffer.c linux/linux/fs/buffer.c
--- linux.orig/linux/fs/buffer.c Wed Aug 20 12:02:29 2003
+++ linux/linux/fs/buffer.c Tue Nov 4 17:53:45 2003
@@ -2842,8 +2842,10 @@
while((tmp >>= 1UL) != 0UL)
bh_hash_shift++;

+ /* alloc_big_struct allocates memory across nodes via vmalloc and
+ * replaces the __get_free_pages call that was here before */
hash_table = (struct buffer_head **)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
} while (hash_table == NULL && --order > 0);
printk("Buffer-cache hash table entries: %d (order: %d, %ld bytes)\n",
nr_hash, order, (PAGE_SIZE << order));
diff -Naur -Xskipfiles linux.orig/linux/fs/dcache.c linux/linux/fs/dcache.c
--- linux.orig/linux/fs/dcache.c Thu Jul 31 07:24:27 2003
+++ linux/linux/fs/dcache.c Tue Nov 4 17:53:45 2003
@@ -21,6 +21,7 @@
#include <linux/slab.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
+#include <linux/vmalloc.h>
#include <linux/cache.h>
#include <linux/module.h>

@@ -1225,8 +1226,10 @@
while ((tmp >>= 1UL) != 0UL)
d_hash_shift++;

+ /* alloc_big_struct allocates memory across nodes via vmalloc and
+ * replaces the __get_free_pages call that was here before */
dentry_hashtable = (struct list_head *)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
} while (dentry_hashtable == NULL && --order >= 0);

printk(KERN_INFO "Dentry cache hash table entries: %d (order: %ld, %ld bytes)\n",
diff -Naur -Xskipfiles linux.orig/linux/fs/inode.c linux/linux/fs/inode.c
--- linux.orig/linux/fs/inode.c Tue Oct 21 12:14:57 2003
+++ linux/linux/fs/inode.c Tue Nov 4 17:53:45 2003
@@ -15,6 +15,7 @@
#include <linux/cache.h>
#include <linux/swap.h>
#include <linux/swapctl.h>
+#include <linux/vmalloc.h>
#include <linux/prefetch.h>
#include <linux/locks.h>

@@ -1147,8 +1148,10 @@
while ((tmp >>= 1UL) != 0UL)
i_hash_shift++;

+ /* alloc_big_struct allocates memory across nodes via vmalloc and
+ * replaces the __get_free_pages call that was here before */
inode_hashtable = (struct list_head *)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
} while (inode_hashtable == NULL && --order >= 0);

printk(KERN_INFO "Inode cache hash table entries: %d (order: %ld, %ld bytes)\n",
diff -Naur -Xskipfiles linux.orig/linux/include/linux/mm.h linux/linux/include/linux/mm.h
--- linux.orig/linux/include/linux/mm.h Tue Oct 14 12:04:02 2003
+++ linux/linux/include/linux/mm.h Tue Nov 4 17:53:45 2003
@@ -498,6 +498,19 @@

#define alloc_page(gfp_mask) alloc_pages(gfp_mask, 0)

+static inline struct page * alloc_pages_round_robin(unsigned int gfp_mask, unsigned int order)
+{
+ /* rotate memory allocation among nodes.
+ * Just needs to be approximate, so no atomic op required.
+ */
+
+ local_cpu_data->page_cache_nodeval++;
+ return alloc_pages_node(local_cpu_data->page_cache_nodeval % numnodes, gfp_mask, order);
+
+}
+
+#define alloc_page_round_robin(gfp_mask) alloc_pages_round_robin(gfp_mask, 0)
+
extern unsigned long FASTCALL(__get_free_pages(unsigned int gfp_mask, unsigned int order));
extern unsigned long FASTCALL(get_zeroed_page(unsigned int gfp_mask));

@@ -681,6 +694,8 @@
#define __GFP_HIGHIO 0x80 /* Can start high mem physical IO? */
#define __GFP_FS 0x100 /* Can call down to low-level FS? */

+#define GFP_ROUND_ROBIN 0x200 /* Use round-robin node memory allocation */
+
#define GFP_NOHIGHIO (__GFP_HIGH | __GFP_WAIT | __GFP_IO)
#define GFP_NOIO (__GFP_HIGH | __GFP_WAIT)
#define GFP_NOFS (__GFP_HIGH | __GFP_WAIT | __GFP_IO | __GFP_HIGHIO)
diff -Naur -Xskipfiles linux.orig/linux/include/linux/vmalloc.h linux/linux/include/linux/vmalloc.h
--- linux.orig/linux/include/linux/vmalloc.h Fri Jun 27 16:11:51 2003
+++ linux/linux/include/linux/vmalloc.h Tue Nov 4 17:53:45 2003
@@ -64,5 +64,26 @@
extern rwlock_t vmlist_lock;

extern struct vm_struct * vmlist;
+
+/* This function can be used to allocate memory using round-robin node
+ * allocation. It emulates how __get_pages would be called but actually
+ * uses vmalloc. A bit is set in the gfp_mask that instructs the memory
+ * to be allocated in round-robin fashion so as not to burden any one
+ * node. If CONFIG_NUMA isn't set, __get_free_pages is simply used.
+ * This was designed for large memory hash tables and caches such as the
+ * buffer and page cache, some tcp hash tables, etc.
+ */
+#ifdef CONFIG_NUMA
+static inline unsigned long alloc_big_struct(unsigned int gfp_mask, unsigned int order) {
+ if (order > MAX_ORDER)
+ return 0;
+ else
+ /* PAGE_SIZE << order gives us bytes for vmalloc */
+ return (unsigned long) __vmalloc(PAGE_SIZE << order, gfp_mask | GFP_ROUND_ROBIN, PAGE_KERNEL);
+}
+#else
+#define alloc_big_struct(m, o) __get_free_pages(m, o)
+#endif /* CONFIG_NUMA */
+
#endif

diff -Naur -Xskipfiles linux.orig/linux/mm/filemap.c linux/linux/mm/filemap.c
--- linux.orig/linux/mm/filemap.c Thu Oct 30 11:34:32 2003
+++ linux/linux/mm/filemap.c Tue Nov 4 17:53:45 2003
@@ -25,6 +25,7 @@
#include <linux/iobuf.h>

#include <asm/pgalloc.h>
+#include <linux/vmalloc.h>
#include <asm/uaccess.h>
#include <asm/mman.h>

@@ -3328,8 +3329,10 @@
while((tmp >>= 1UL) != 0UL)
page_hash_bits++;

+ /* alloc_big_struct allocates memory across nodes via vmalloc and
+ * replaces the __get_free_pages call that was here before */
page_hash_table = (struct page_cache_bucket *)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
} while(page_hash_table == NULL && --order > 0);

printk("Page-cache hash table entries: %d (order: %ld, %ld bytes)\n",
diff -Naur -Xskipfiles linux.orig/linux/mm/numa.c linux/linux/mm/numa.c
--- linux.orig/linux/mm/numa.c Mon Apr 28 20:18:42 2003
+++ linux/linux/mm/numa.c Tue Nov 4 17:53:45 2003
@@ -47,11 +47,9 @@
struct page * page_cache_alloc(struct address_space *x)
{
/*
- * rotate the buffer cache page allocations among nodes
- * just needs to be approximate, so no atomic op required
+ * Use round-robin allocation mechanism
*/
- local_cpu_data->page_cache_nodeval++;
- return alloc_pages_node(local_cpu_data->page_cache_nodeval % numnodes , x->gfp_mask, 0);
+ return alloc_page_round_robin(x->gfp_mask);
}
#endif

diff -Naur -Xskipfiles linux.orig/linux/mm/vmalloc.c linux/linux/mm/vmalloc.c
--- linux.orig/linux/mm/vmalloc.c Thu Jul 31 07:24:27 2003
+++ linux/linux/mm/vmalloc.c Tue Nov 4 17:53:45 2003
@@ -107,7 +107,10 @@

if (!pages) {
spin_unlock(&init_mm.page_table_lock);
- page = alloc_page(gfp_mask);
+ if (gfp_mask & GFP_ROUND_ROBIN)
+ page = alloc_page_round_robin(gfp_mask);
+ else
+ page = alloc_page(gfp_mask);
spin_lock(&init_mm.page_table_lock);
} else {
page = (**pages);
diff -Naur -Xskipfiles linux.orig/linux/net/ipv4/tcp.c linux/linux/net/ipv4/tcp.c
--- linux.orig/linux/net/ipv4/tcp.c Thu Jul 31 07:24:27 2003
+++ linux/linux/net/ipv4/tcp.c Tue Nov 4 17:53:45 2003
@@ -251,6 +251,7 @@
#include <linux/poll.h>
#include <linux/init.h>
#include <linux/smp_lock.h>
+#include <linux/vmalloc.h>
#include <linux/fs.h>
#include <linux/random.h>

@@ -2581,8 +2582,10 @@
tcp_ehash_size >>= 1;
while (tcp_ehash_size & (tcp_ehash_size-1))
tcp_ehash_size--;
+ /* alloc_big_struct allocates memory across nodes via vmalloc and
+ * replaces the __get_free_pages call that was here before */
tcp_ehash = (struct tcp_ehash_bucket *)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
} while (tcp_ehash == NULL && --order > 0);

if (!tcp_ehash)
@@ -2598,7 +2601,8 @@
if ((tcp_bhash_size > (64 * 1024)) && order > 0)
continue;
tcp_bhash = (struct tcp_bind_hashbucket *)
- __get_free_pages(GFP_ATOMIC, order);
+ alloc_big_struct(GFP_ATOMIC, order);
+ /* __get_free_pages(GFP_ATOMIC, order); */
} while (tcp_bhash == NULL && --order >= 0);

if (!tcp_bhash)