Date: 2006-12-08
From: Jens Axboe
Subject: Re: cfq performance gap

On Thu, Dec 07 2006, Avantika Mathur wrote:
> Hi Jens,

(you probably noticed now, but the axboe@suse.de email is no longer
valid)

> I've noticed a performance gap between the cfq scheduler and other io
> schedulers when running the rawio benchmark.
> Results from rawio on 2.6.19, cfq and noop schedulers:
>
> CFQ:
>
> procs  device           num read  KB/sec  I/O Ops/sec
> -----  ---------------  --------  ------  -----------
>    16  /dev/sda            16412    8338         2084
> -----  ---------------  --------  ------  -----------
>    16                      16412    8338         2084
>
> Total run time 0.492072 seconds
>
>
> NOOP:
>
> procs  device           num read  KB/sec  I/O Ops/sec
> -----  ---------------  --------  ------  -----------
>    16  /dev/sda            16399   29224         7306
> -----  ---------------  --------  ------  -----------
>    16                      16399   29224         7306
>
> Total run time 0.140284 seconds
>
> The benchmark workload is 16 processes running 4k random reads.
>
> Is this performance gap a known issue?
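
[For reference, a minimal sketch in C of the kind of workload described
above: 16 processes, each issuing 4 KiB random reads against the raw
device. This is not the rawio benchmark itself; the device path, read
count, device size and O_DIRECT use are illustrative assumptions.]

/*
 * Sketch of the reported workload: 16 processes doing 4 KiB random
 * reads from the raw device.  Not the rawio benchmark itself -- the
 * device path, read count and O_DIRECT use are assumptions.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

#define NPROCS	16
#define BLKSIZE	4096
#define NREADS	512			/* reads per process, arbitrary */

int main(void)
{
	const char *dev = "/dev/sda";	/* device under test (assumed) */
	off_t nblocks = 1 << 19;	/* assume >= 2GiB of device */
	int i;

	for (i = 0; i < NPROCS; i++) {
		if (fork() == 0) {
			void *buf;
			int r, fd = open(dev, O_RDONLY | O_DIRECT);

			if (fd < 0 || posix_memalign(&buf, BLKSIZE, BLKSIZE)) {
				perror("setup");
				_exit(1);
			}
			srandom(getpid());
			for (r = 0; r < NREADS; r++) {
				off_t blk = random() % nblocks;

				if (pread(fd, buf, BLKSIZE, blk * BLKSIZE) < 0)
					perror("pread");
			}
			close(fd);
			_exit(0);
		}
	}
	for (i = 0; i < NPROCS; i++)
		wait(NULL);
	return 0;
}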

CFQ could be a little slower at this benchmark, but your results are
much worse than I would expect. What is the queueing depth of sda? How
are you invoking rawio?
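
[A small sketch of how one might answer those questions, assuming the
standard sysfs attributes for sda (queue/scheduler, queue/nr_requests
and, for a SCSI/SATA disk, device/queue_depth):]

/*
 * Dump the sysfs attributes relevant to the questions above.
 * Assumes the usual block/SCSI sysfs layout for sda.
 */
#include <stdio.h>

static void show(const char *path)
{
	char line[256];
	FILE *f = fopen(path, "r");

	if (!f) {
		perror(path);
		return;
	}
	if (fgets(line, sizeof(line), f))
		printf("%-38s %s", path, line);
	fclose(f);
}

int main(void)
{
	show("/sys/block/sda/queue/scheduler");	/* active I/O scheduler */
	show("/sys/block/sda/queue/nr_requests");	/* request queue size */
	show("/sys/block/sda/device/queue_depth");	/* device (TCQ/NCQ) depth */
	return 0;
}

[The scheduler file lists the available elevators with the active one
in brackets, so with cfq selected it should show "[cfq]".]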

Your runtime is very low; how does it look if you allow the test to run
for much longer? Roughly 30MiB/sec (29224 KB/sec) of random read
bandwidth seems very high for 4k random reads, so I'm wondering what
exactly is being tested here.

--
Jens Axboe

