Subject: Re: [RFC PATCH 0/5] Support multiple pages allocation
On Wed, Jul 10, 2013 at 11:17:03AM +0200, Michal Hocko wrote:
> On Wed 10-07-13 09:31:42, Joonsoo Kim wrote:
> > On Thu, Jul 04, 2013 at 12:00:44PM +0200, Michal Hocko wrote:
> > > On Thu 04-07-13 13:24:50, Joonsoo Kim wrote:
> > > > On Thu, Jul 04, 2013 at 12:01:43AM +0800, Zhang Yanfei wrote:
> > > > > On 07/03/2013 11:51 PM, Zhang Yanfei wrote:
> > > > > > On 07/03/2013 11:28 PM, Michal Hocko wrote:
> > > > > >> On Wed 03-07-13 17:34:15, Joonsoo Kim wrote:
> > > > > >> [...]
> > > > > >>> For single page allocations, this patchset makes the allocator
> > > > > >>> slower than before (-5%).
> > > > > >>
> > > > > >> Slowing down the most used path is a no-go. Where does this slow down
> > > > > >> come from?
> > > > > >
> > > > > > I guess it might be because, for single page allocations,
> > > > > > compared to the original code, this patch adds two parameters
> > > > > > (nr_pages and pages) and does extra checks on nr_pages in the
> > > > > > allocation path.
> > > > > >
> > > > >
> > > > > If so, adding a separate path for the multiple allocations seems better.
> > > >
> > > > Hello, all.
> > > >
> > > > I modified the code to optimize single page allocation via the
> > > > likely() macro. I attach the new version at the end of this mail.
> > > >
> > > > In this case, the performance degradation for single page allocation
> > > > is -2.5%. I guess the remaining overhead comes from the two added
> > > > parameters. Is that an unreasonable cost to support this new feature?
> > >
> > > Which benchmark are you using for this testing?
> >
> > I use my own module which does allocations repeatedly.
>
> I am not sure this microbenchmark will tell us much. Allocations are
> usually not short lived so the longer time might get amortized.
> If you want to use the multi page allocation for read ahead then try to
> model your numbers on read-ahead workloads.

Of course. Later, I will get results on read-ahead workloads or on the
vmalloc workload recommended by Zhang.

I think that without this microbenchmark we cannot accurately measure this
modification's performance effect on single page allocation, because the
impact on single page allocation is relatively small and easily hidden by
other factors.
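
For reference, here is a minimal sketch of the kind of in-kernel
microbenchmark I mean (illustrative only, not the exact module I used;
the module name and iteration count below are made up):

/*
 * Illustrative single-page allocation microbenchmark: allocate and
 * free one order-0 page repeatedly and report the elapsed time.
 */
#include <linux/module.h>
#include <linux/gfp.h>
#include <linux/ktime.h>

#define NR_ITERATIONS	1000000UL

static int __init alloc_bench_init(void)
{
	ktime_t start, end;
	struct page *page;
	unsigned long i;

	start = ktime_get();
	for (i = 0; i < NR_ITERATIONS; i++) {
		page = alloc_pages(GFP_KERNEL, 0);
		if (!page)
			return -ENOMEM;
		__free_pages(page, 0);
	}
	end = ktime_get();

	pr_info("alloc_bench: %lu alloc/free pairs in %lld ns\n",
		i, ktime_to_ns(ktime_sub(end, start)));
	return 0;
}

static void __exit alloc_bench_exit(void)
{
}

module_init(alloc_bench_init);
module_exit(alloc_bench_exit);
MODULE_LICENSE("GPL");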

So far, I have tried several implementations of this feature and found that
a separate path also makes single page allocation slower (-1.0~-1.5%).
I couldn't find any reason for this other than the fact that the text size
of page_alloc.o is roughly 1600 bytes larger than before.

Before
   text    data     bss     dec     hex filename
  34466    1389     640   36495    8e8f mm/page_alloc.o

Separate path
   text    data     bss     dec     hex filename
  36074    1413     640   38127    94ef mm/page_alloc.o

An implementation that I have not yet posted, which passes two more
arguments to __alloc_pages_nodemask(), also makes single page allocation
slower (-1.0~-1.5%). So, going forward, I will work with that implementation
rather than the separate-path implementation.
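
To make the idea concrete, the semantics I am aiming for look roughly like
the sketch below: the caller passes nr_pages and an array to fill, and a
likely() hint keeps the common single-page request on the existing fast
path. This is only an illustration (the helper name is made up and it
naively loops over alloc_pages()); the actual approach passes nr_pages and
pages down into __alloc_pages_nodemask() instead of looping like this, so
the per-call overhead is paid once.

/*
 * Illustration only: a naive bulk wrapper showing the intended
 * semantics (fill a caller-supplied array with nr_pages order-0
 * pages).
 */
#include <linux/gfp.h>
#include <linux/mm.h>

static unsigned long alloc_pages_bulk_naive(gfp_t gfp_mask,
					    unsigned long nr_pages,
					    struct page **pages)
{
	unsigned long i;

	/* Fast path: the overwhelmingly common single-page request. */
	if (likely(nr_pages == 1)) {
		pages[0] = alloc_pages(gfp_mask, 0);
		return pages[0] ? 1 : 0;
	}

	for (i = 0; i < nr_pages; i++) {
		pages[i] = alloc_pages(gfp_mask, 0);
		if (!pages[i])
			break;
	}

	return i;	/* number of pages actually allocated */
}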

Thanks for the comment!

> --
> Michal Hocko
> SUSE Labs
>
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in
> the body to majordomo@kvack.org. For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: email@kvack.org

