Subject: Re: [PATCH]mmap: add alignment for some variables
From: Shaohua Li <shaohua.li@intel.com>
Date: Wed, 30 Mar 2011
On Wed, 2011-03-30 at 10:10 +0800, Andrew Morton wrote:
> On Wed, 30 Mar 2011 09:54:01 +0800 Shaohua Li <shaohua.li@intel.com> wrote:
>
> > On Wed, 2011-03-30 at 09:41 +0800, Andrew Morton wrote:
> > > On Wed, 30 Mar 2011 09:36:40 +0800 Shaohua Li <shaohua.li@intel.com> wrote:
> > >
> > > > > how is it that this improves things?
> > > > Hmm, it actually is:
> > > > struct percpu_counter {
> > > > spinlock_t lock;
> > > > s64 count;
> > > > #ifdef CONFIG_HOTPLUG_CPU
> > > > struct list_head list; /* All percpu_counters are on a list */
> > > > #endif
> > > > s32 __percpu *counters;
> > > > } __attribute__((__aligned__(1 << (INTERNODE_CACHE_SHIFT))))
> > > > so lock and count are in one cache line.
> > >
> > > ____cacheline_aligned_in_smp would achieve that?
> > ____cacheline_aligned_in_smp can't guarantee the cache alignment for
> > multiple nodes, because the variable can be updated by multiple
> > nodes/cpus.
>
> Confused. If an object is aligned at a multiple-of-128 address on one
> node, it is aligned at a multiple-of-128 address when viewed from other
> nodes, surely?
>
> Even if the cache alignment to which you're referring is the internode
> cache, can a 34-byte, L1-cache-aligned structure ever span multiple
> internode cachelines?
Ah, you are right, ____cacheline_aligned_in_smp is enough here. Thanks for
correcting me.
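
For reference, this is roughly what the two annotations boil down to (a
simplified sketch based on the generic include/linux/cache.h definitions;
architectures can override these values):

/* Plain SMP cacheline alignment: align/pad to the L1 cacheline size. */
#define ____cacheline_aligned	__attribute__((__aligned__(SMP_CACHE_BYTES)))

#ifdef CONFIG_SMP
#define ____cacheline_aligned_in_smp	____cacheline_aligned
#else
#define ____cacheline_aligned_in_smp
#endif

/*
 * The internode shift falls back to the L1 shift unless the architecture
 * defines a larger internode cacheline (e.g. vSMP on x86).
 */
#ifndef INTERNODE_CACHE_SHIFT
#define INTERNODE_CACHE_SHIFT	L1_CACHE_SHIFT
#endif

Since the aligned structure is no bigger than one L1 cacheline, it cannot
straddle a larger internode cacheline either, as you point out.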

Give some variables the proper alignment/section annotations to avoid
cacheline issues. In workloads that do mmap/munmap heavily, these variables
are accessed frequently.

Signed-off-by: Shaohua Li <shaohua.li@intel.com>
---
mm/mmap.c | 10 +++++++---
1 file changed, 7 insertions(+), 3 deletions(-)

Index: linux/mm/mmap.c
===================================================================
--- linux.orig/mm/mmap.c 2011-03-30 08:45:05.000000000 +0800
+++ linux/mm/mmap.c 2011-03-30 10:34:36.000000000 +0800
@@ -84,10 +84,14 @@ pgprot_t vm_get_page_prot(unsigned long
}
EXPORT_SYMBOL(vm_get_page_prot);

-int sysctl_overcommit_memory = OVERCOMMIT_GUESS; /* heuristic overcommit */
-int sysctl_overcommit_ratio = 50; /* default is 50% */
+int sysctl_overcommit_memory __read_mostly = OVERCOMMIT_GUESS; /* heuristic overcommit */
+int sysctl_overcommit_ratio __read_mostly = 50; /* default is 50% */
int sysctl_max_map_count __read_mostly = DEFAULT_MAX_MAP_COUNT;
-struct percpu_counter vm_committed_as;
+/*
+ * Keep vm_committed_as in its own cacheline, not shared with other
+ * variables: it can be updated frequently by several CPUs.
+ */
+struct percpu_counter vm_committed_as ____cacheline_aligned_in_smp;

/*
* Check that a process has enough memory to allocate a new virtual
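
For the sysctl knobs, here is a rough sketch of what __read_mostly means
(assuming the common x86-style definition; architectures that don't provide
one just leave the annotation empty):

/*
 * Simplified sketch: put the variable in the read-mostly data section so
 * that rarely-written data is grouped together and does not share a
 * cacheline with frequently-written data such as vm_committed_as.
 */
#define __read_mostly	__attribute__((__section__(".data..read_mostly")))

sysctl_overcommit_memory and sysctl_overcommit_ratio are only written when
the sysctls are changed, so grouping them with other read-mostly data keeps
them out of hot, frequently-dirtied cachelines.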



