Subject: Re: [PATCH v1 2/2] zram: remove init_lock in zram_make_request
Hello Minchan,
excellent analysis!

On (01/30/15 23:41), Minchan Kim wrote:
> Yes, __srcu_read_lock is a little bit heavier, but the instruction
> counts don't differ enough to make a 10% difference. Another candidate
> is __cond_resched, but I don't think that's it either: our test was
> CPU intensive, so I don't think scheduling latency affects total
> bandwidth.
>
> A more likely culprit is your data pattern.
> It seems you didn't use scramble_buffers=0 or zero_buffers in fio, so
> fio fills the buffers with random data, and zram bandwidth can vary
> with the compression/decompression ratio.

Completely agree.
Shame on me. I've gotten so used to iozone (iozone uses the same data
pattern, 0xA5; that's what its +Z option is for) that I didn't even
think about the data pattern in fio. Sorry.
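
Next time, something along these lines should pin the buffer contents
(just a sketch: scramble_buffers=0 is the knob you named; the
/dev/zram0 target and the buffer_pattern=0xA5 value are my own
assumptions, chosen to match iozone's pattern):

[zram-fixed-pattern]
filename=/dev/zram0
rw=randrw
bs=4k
direct=1
; don't re-randomize buffer contents between submissions
scramble_buffers=0
; fill I/O buffers with a fixed 0xA5 pattern instead of random data
buffer_pattern=0xA5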

> 1) randread
> srcu is worse by 0.63%, but the difference is really marginal.
>
> 2) randwrite
> srcu is better by 1.24%.
>
> 3) randrw
> srcu is better by 2.3%.

hm, interesting. I'll re-check.

> Okay, if you still have concerns about the data, how about this?

I'm not too upset about losing 0.6234187%; my concern was iozone's 10%
difference (which looks a bit worse).


I'll review your patch. Thanks for your effort.
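
For anyone reading along, with init_lock gone the fast path should look
roughly like the sketch below (my own reading of the idea, not a quote
from your patch; the zram->srcu field name is an assumption):

static void zram_make_request(struct request_queue *queue,
                              struct bio *bio)
{
        struct zram *zram = queue->queuedata;
        int idx;

        /* srcu read side replaces down_read(&zram->init_lock) */
        idx = srcu_read_lock(&zram->srcu);
        __zram_make_request(zram, bio);
        srcu_read_unlock(&zram->srcu, idx);
}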


> >
> > by "data pattern" you mean usage scenario? well, I usually use zram for
> > `make -jX', where X=[4..N], so an N-way concurrent read-write scenario.
>
> What I meant is what data fills the I/O buffer, which is really
> important when evaluating zram, because the compression/decompression
> speed relies on it.
>

I see. I never tested it with `make' anyway, only with iozone +Z.

-ss

