Subject: Re: [RFC 0/3] reduce latency of direct async compaction
On Wed, Dec 09, 2015 at 01:40:06PM +0800, Aaron Lu wrote:
> On Wed, Dec 09, 2015 at 09:33:53AM +0900, Joonsoo Kim wrote:
> > On Tue, Dec 08, 2015 at 04:52:42PM +0800, Aaron Lu wrote:
> > > On Tue, Dec 08, 2015 at 03:51:16PM +0900, Joonsoo Kim wrote:
> > > > I add work-around for this problem at isolate_freepages(). Please test
> > > > following one.
> > >
> > > Still no luck and the error is about the same:
> >
> > There is a mistake... Could you insert () around
> > cc->free_pfn & ~(pageblock_nr_pages-1), as follows?
> >
> > cc->free_pfn == (cc->free_pfn & ~(pageblock_nr_pages-1))
>
> Oh right, of course.
>
> Good news, the result is much better now:
> $ cat {0..8}/swap
> cmdline: /lkp/aaron/src/bin/usemem 100064603136
> 100064603136 transferred in 72 seconds, throughput: 1325 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100072049664
> 100072049664 transferred in 74 seconds, throughput: 1289 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100070246400
> 100070246400 transferred in 92 seconds, throughput: 1037 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100069545984
> 100069545984 transferred in 81 seconds, throughput: 1178 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100058895360
> 100058895360 transferred in 78 seconds, throughput: 1223 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100066074624
> 100066074624 transferred in 94 seconds, throughput: 1015 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100062855168
> 100062855168 transferred in 77 seconds, throughput: 1239 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100060990464
> 100060990464 transferred in 73 seconds, throughput: 1307 MB/s
> cmdline: /lkp/aaron/src/bin/usemem 100064996352
> 100064996352 transferred in 84 seconds, throughput: 1136 MB/s
> Max: 1325 MB/s
> Min: 1015 MB/s
> Avg: 1194 MB/s

Nice result! Thanks for testing.
I will make a proper formatted patch soon.
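
For reference, below is a minimal userspace sketch (not the actual patch)
of why the parentheses matter: '==' binds more tightly than '&', so the
unparenthesized form compares first and then masks the boolean result, and
the pageblock-alignment check never fires. The pageblock_nr_pages value is
only an assumed example here (512, i.e. 2MB pageblocks with 4KB pages).

#include <stdio.h>

/* Assumed example value: 2MB pageblock / 4KB pages = 512 pages */
#define pageblock_nr_pages 512UL

int main(void)
{
	unsigned long free_pfn = 1024;	/* a pageblock-aligned pfn */

	/* Buggy: parses as (free_pfn == free_pfn) & ~(pageblock_nr_pages - 1),
	 * which is 1 & ~511, i.e. always 0 */
	unsigned long buggy = free_pfn == free_pfn & ~(pageblock_nr_pages - 1);

	/* Fixed: tests whether free_pfn is pageblock-aligned */
	unsigned long fixed = free_pfn == (free_pfn & ~(pageblock_nr_pages - 1));

	printf("buggy=%lu fixed=%lu\n", buggy, fixed);	/* buggy=0 fixed=1 */
	return 0;
}

gcc's -Wparentheses warns about exactly this pattern, which is one way to
catch it before testing.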

So, is your concern resolved? I don't think always-always can match the
performance of always-never on this test case, because the cost of
migrating pages to build hugepages is charged only in the always-always
case. In exchange, it ends up with more hugepage mappings, which may give
better performance in some situations. I guess that is the intention of
that option.

Thanks.

