Subject: Re: SMP performance degradation with sysbench
On Tue, 2007-03-20 at 10:29 +0800, Zhang, Yanmin wrote:
> On Wed, 2007-03-14 at 16:33 -0700, Siddha, Suresh B wrote:
> > On Tue, Mar 13, 2007 at 05:08:59AM -0700, Nick Piggin wrote:
> > > I would agree that it points to MySQL scalability issues, however the
> > > fact that such large gains come from tcmalloc is still interesting.
> >
> > What glibc version are you, Anton and others using?
> >
> > Does that version have this fix included?
> >
> > Dynamically size mmap threshold if the program frees mmapped blocks.
> >
> > http://sources.redhat.com/cgi-bin/cvsweb.cgi/libc/malloc/malloc.c.diff?r1=1.158&r2=1.159&cvsroot=glibc

> The *ROOT CAUSE* is that the dynamic thresholds don't apply to non-main arenas.
>
> To verify my idea, I created a small patch. When freeing a block, always
> check mp_.trim_threshold even when the block is not in the main arena. The
> patch is only meant to verify the idea; it is not the final solution.
>
> --- glibc-2.5-20061008T1257_bak/malloc/malloc.c 2006-09-08 00:06:02.000000000 +0800
> +++ glibc-2.5-20061008T1257/malloc/malloc.c 2007-03-20 07:41:03.000000000 +0800
> @@ -4607,10 +4607,13 @@ _int_free(mstate av, Void_t* mem)
>      } else {
>        /* Always try heap_trim(), even if the top chunk is not
>           large, because the corresponding heap might go away. */
> +      if ((unsigned long)(chunksize(av->top)) >=
> +          (unsigned long)(mp_.trim_threshold)) {
>        heap_info *heap = heap_for_ptr(top(av));
>
>        assert(heap->ar_ptr == av);
>        heap_trim(heap, mp_.top_pad);
> +      }
>      }
>    }
>
>
I sent a new patch to the glibc maintainer but didn't get a response, so I am resending it here.

Glibc arenas exist to reduce malloc/free contention among threads. But an arena
shrinks aggressively, and therefore also has to grow aggressively again. When a heap
grows, mprotect is called; when a heap shrinks, mmap is called. In the kernel, both
mmap and mprotect need to take the write lock on mm->mmap_sem, which introduces new
contention. That new contention effectively cancels out the benefit of the arenas.
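
For illustration only, here is a minimal reproducer sketch (not part of the
original report; the thread count, iteration count and block size below are
arbitrary assumptions). It keeps non-main arenas growing and shrinking, so the
resulting mprotect/mmap calls and mm->mmap_sem write-lock contention can be
observed with a profiler such as oprofile:

/* arena_churn.c - hypothetical reproducer, build with: gcc -O2 -pthread */
#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 8
#define ITERS    100000
#define BLKSZ    (64 * 1024)   /* below the default 128 kB mmap threshold,
                                  so allocations stay inside the arenas */

static void *worker(void *arg)
{
        int i;

        (void)arg;
        for (i = 0; i < ITERS; i++) {
                char *p = malloc(BLKSZ);
                if (p)
                        p[0] = 0;       /* touch the block */
                free(p);                /* a free next to top may trim the heap */
        }
        return NULL;
}

int main(void)
{
        pthread_t tid[NTHREADS];
        int i;

        for (i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, worker, NULL);
        for (i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);
        return 0;
}

With an unpatched glibc, a free() whose chunk ends up bordering a non-main
arena's top chunk calls heap_trim() unconditionally, so a run like this should
spend a noticeable amount of time under the mm->mmap_sem write lock; with the
patch below, the trim (and the subsequent re-growth) only happens once the top
chunk exceeds mp_.trim_threshold.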

Here is a new patch to address this issue.

Signed-off-by: Zhang Yanmin <yanmin.zhang@intel.com>

---

--- glibc-2.5-20061008T1257_bak/malloc/malloc.c 2006-09-08 00:06:02.000000000 +0800
+++ glibc-2.5-20061008T1257/malloc/malloc.c 2007-03-30 09:01:18.000000000 +0800
@@ -4605,12 +4605,13 @@ _int_free(mstate av, Void_t* mem)
        sYSTRIm(mp_.top_pad, av);
 #endif
     } else {
-      /* Always try heap_trim(), even if the top chunk is not
-         large, because the corresponding heap might go away. */
-      heap_info *heap = heap_for_ptr(top(av));
-
-      assert(heap->ar_ptr == av);
-      heap_trim(heap, mp_.top_pad);
+      if ((unsigned long)(chunksize(av->top)) >=
+          (unsigned long)(mp_.trim_threshold)) {
+        heap_info *heap = heap_for_ptr(top(av));
+
+        assert(heap->ar_ptr == av);
+        heap_trim(heap, mp_.top_pad);
+      }
     }
   }
