From: Jens Axboe
Date: 2009-06-08
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v9
On Sat, Jun 06 2009, Frederic Weisbecker wrote:
> On Sat, Jun 06, 2009 at 02:23:40AM +0200, Jan Kara wrote:
> > On Fri 05-06-09 20:18:15, Chris Mason wrote:
> > > On Fri, Jun 05, 2009 at 11:14:38PM +0200, Jan Kara wrote:
> > > > On Fri 05-06-09 21:15:28, Jens Axboe wrote:
> > > > > On Fri, Jun 05 2009, Frederic Weisbecker wrote:
> > > > > > The result with noop is even more impressive.
> > > > > >
> > > > > > See: http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop.pdf
> > > > > >
> > > > > > Also a comparison, noop with pdflush against noop with bdi writeback:
> > > > > >
> > > > > > http://kernel.org/pub/linux/kernel/people/frederic/dbench-noop-cmp.pdf
> > > > >
> > > > > OK, so things aren't exactly peachy here to begin with. It may not
> > > > > actually BE an issue, or at least not a new one, but that doesn't mean
> > > > > that we should not attempt to quantify the impact.
> > > > What looks interesting is also the overall throughput. With pdflush we
> > > > get to 2.5 MB/s + 26 MB/s while with per-bdi we get to 2.7 MB/s + 13 MB/s.
> > > > So per-bdi seems to be *more* fair but throughput suffers a lot (which
> > > > might be inevitable due to incurred seeks).
> > > > Frederic, how much does dbench achieve for you on just one partition
> > > > (test both consecutively if possible) with as many threads as those
> > > > two dbench instances have together? Thanks.
> > >
> > > Is the graph showing us dbench tput or disk tput? I'm assuming it is
> > > disk tput, so bdi may just be writing less?
> > Good question. I was assuming dbench throughput :).
> >
> > Honza
>
>
> Yeah, it's dbench. Maybe that's not the right tool to measure the writeback
> layer, even though dbench results are necessarily influenced by the writeback
> behaviour.
>
> Maybe I should use something else?
>
> Note that if you want, I can put some surgical trace_printk() calls
> in fs/fs-writeback.c.
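
For reference, the kind of instrumentation being offered would look roughly
like the sketch below: a trace_printk() dropped into the per-bdi flush path
so that each writeback pass shows up in the ftrace ring buffer
(/sys/kernel/debug/tracing/trace). The helper and its placement are only
stand-ins for whatever the v9 patchset actually does, not a real patch:

/* Sketch only: names and placement are assumptions about the v9 code. */
#include <linux/kernel.h>      /* trace_printk() */
#include <linux/writeback.h>   /* struct writeback_control */
#include <linux/backing-dev.h> /* struct backing_dev_info */

/*
 * Call this from the per-bdi flush loop in fs/fs-writeback.c after each
 * pass over the dirty inode list.
 */
static void trace_wb_pass(struct backing_dev_info *bdi,
                          struct writeback_control *wbc,
                          long pages_written)
{
        /*
         * trace_printk() logs to the ftrace ring buffer rather than the
         * console, so it is cheap enough to leave enabled for a full
         * dbench run.
         */
        trace_printk("bdi %p: wrote %ld pages, %ld left in this pass\n",
                     bdi, pages_written, wbc->nr_to_write);
}

Reading that trace alongside the dbench graphs would at least show whether
per-bdi is issuing less writeback overall or just spreading it differently,
which is Chris' dbench-tput-vs-disk-tput question.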

FWIW, I ran a similar test here just now. CFQ was used, two partitions
on an (otherwise) idle drive. I used 30 clients per dbench and 600s
runtime. Results are nearly identical, both throughout the run and in
total:

/dev/sdb1
Throughput 165.738 MB/sec 30 clients 30 procs max_latency=459.002 ms

/dev/sdb2
Throughput 165.773 MB/sec 30 clients 30 procs max_latency=607.198 ms

The flusher threads see very little exercise here.

--
Jens Axboe


