Subject: Re: split-lru performance mesurement part2
On Tue,  7 Oct 2008 23:26:54 +0900 (JST)
KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> wrote:

> Hi
>
> > Yup,
> > I know many people want other benchmark results too.
> > I'll try to measure other benchmarks next week.
>
> I ran another benchmark today.
> I chose dbench because it is one of the best-known and most realistic I/O benchmark workloads.
>
>
> % dbench client.txt 4000
>
> mainline: Throughput 13.4231 MB/sec 4000 clients 4000 procs max_latency=1421988.159 ms
> mmotm(*): Throughput 7.0354 MB/sec 4000 clients 4000 procs max_latency=2369213.380 ms
>
> (*) mmotm 2/Oct + Hugh's recent slub fix
>
>
> Wow!
> mmotm is much slower than mainline (about half the throughput).
>
> Therefore, I measured it on a "mainline + split-lru(only)" build.
>
>
> mainline + split-lru(only): Throughput 14.4062 MB/sec 4000 clients 4000 procs max_latency=1152231.896 ms
>
>
> OK!
> split-lru outperforms mainline in both throughput and latency :)
>
>
>
> However, I don't understand why this regression happened.
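
[Editor's note: a minimal sketch, not part of the thread, of how the quoted comparison could be repeated on each kernel under test. It assumes the standard dbench loadfile and the same 4000-client invocation quoted above; the log naming is only an example.]

    #!/bin/sh
    # Run the same dbench workload as in the report and keep the summary
    # line (throughput and max_latency), tagged with the running kernel.
    KERNEL=$(uname -r)                  # e.g. mainline, mmotm, mainline+split-lru
    dbench client.txt 4000 2>&1 | tee dbench-$KERNEL.log
    grep Throughput dbench-$KERNEL.log  # "Throughput ... MB/sec ... max_latency=..."
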

erk.

dbench is pretty chaotic and it could be that a good change causes
dbench to get worse. That's happened plenty of times in the past.


> Do you have any suggestion?


One of these:

vmscan-give-referenced-active-and-unmapped-pages-a-second-trip-around-the-lru.patch
vm-dont-run-touch_buffer-during-buffercache-lookups.patch

perhaps?
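
[Editor's note: a hedged sketch, not from the thread, of one way to check those suspects: revert a single candidate on top of the mmotm tree, rebuild, boot the result, and repeat the same dbench run. The source and patch paths below are only examples.]

    # Revert one candidate from the mmotm broken-out series (applies with -p1).
    cd ~/src/linux-mmotm
    patch -R -p1 < ../patches/vm-dont-run-touch_buffer-during-buffercache-lookups.patch
    make -j8 && sudo make modules_install install
    # After booting into the rebuilt kernel, re-run the workload from the report:
    dbench client.txt 4000
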

