Subject: Re: zram: per-cpu compression streams

Hello Sergey,

On Thu, Mar 31, 2016 at 10:26:26AM +0900, Sergey Senozhatsky wrote:
> Hello,
>
> On (03/31/16 07:12), Minchan Kim wrote:
> [..]
> > > I used a bit different script: no `buffer_compress_percentage' option,
> > > because it provides "a mix of random data and zeroes"
> >
> > Normally, zram's compression ratio is 3 or 2, so I used it.
> > Hmm, isn't that a more realistic use case?
>
> this option guarantees that the data supplied to zram will have
> the requested compression ratio? hm, but we never do that in real
> life, zram sees random data.

I agree it's hard to create such realistic random data with a benchmark.
One option is to share a swap dump from a real product, for example,
Android or webOS, and feed it to the benchmark. But as you know, that
cannot cover every workload, either. So, to keep testing easy, I wanted
to generate data with a representative compression ratio, and fio
provides an option for that via buffer_compress_percentage.
It would be better than feeding random data, which could add a lot of
noise to each test cycle.
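
For example, a tiny job like this (just a sketch, not what I actually ran;
the numbers are only illustrative) should give buffers that compress
somewhere around 3:1:

# sketch: ask fio for ~67% compressible buffers, which should land
# roughly around a 3:1 compression ratio on zram
cat > fio-compress-sketch <<EOF
[global]
bs=4k
ioengine=sync
direct=1
size=100m
filename=/dev/zram0
buffer_compress_percentage=67
buffer_compress_chunk=4k

[seq-write]
rw=write
EOF
fio ./fio-compress-sketch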

>
> > If we don't use buffer_compress_percentage, what's the content in the buffer?
>
> that's a good question. I quickly looked into the fio source code;
> we need to use the "buffer_pattern=str" option, I think, so the buffers
> will be filled with the same data.
>
> I don't mind having buffer_compress_percentage as a separate test (set
> as a local test option), but I think that using a common buffer pattern
> adds more confidence when we compare test results.

If we both use the same "buffer_compress_percentage=something", the
results are good to compare. The benefit of buffer_compress_percentage
is that we can easily change the compression ratio in zram testing and
run various tests to see how the compression ratio or speed affects
the system.
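
For instance, something like this could sweep several ratios in one go
(untested sketch; it assumes /dev/zram0 is already set up):

#!/bin/sh
# untested sketch: run the same 4k sequential write at several
# compression percentages and keep one log per percentage
for pct in 30 50 70 90; do
	fio --name=seq-write --rw=write --bs=4k --ioengine=sync \
		--direct=1 --size=100m --filename=/dev/zram0 \
		--buffer_compress_percentage=$pct \
		> /tmp/fio-zram-compress-$pct
done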

>
> [..]
> > > hm, but I guess it's not enough; fio probably will have different
> > > data (well, only if we didn't ask it to zero-fill the buffers) for
> > > different tests, causing different zram->zsmalloc behaviour. need
> > > to check it.
> [..]
> > > #jobs4
> > > READ: 8720.4MB/s 7301.7MB/s 7896.2MB/s
> > > READ: 7510.3MB/s 6690.1MB/s 6456.2MB/s
> > > WRITE: 2211.6MB/s 1930.8MB/s 2713.9MB/s
> > > WRITE: 2002.2MB/s 1629.8MB/s 2227.7MB/s
> >
> > Your case is a 40% win. It's huge, nice!
> > I tested with your guideline (i.e., no buffer_compress_percentage,
> > scramble_buffers=0) but still see only a 10% enhancement on my machine.
> > Hmm...
> >
> > How about testing my fio job file on your machine?
> > Is it still a 40% win?
>
> I'll retest with new config.
>
> > Also, I want to test again with exactly your configuration.
> > Could you tell me your zram environment (i.e., disksize, compression
> > algorithm) and share your fio job file?
>
> sure.

I tested with the parameters you suggested.
On my side, the win is better than in my previous test, but it seems
your test finishes very quickly. IOW, the filesize is small and loops
is just 1. Please test with filesize=500m and loops=10 or 20.
That should make your test more stable; the enhancement is 10~20% on
my side. Let's discuss further once our test results are consistent.
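
IOW, in the [global] section of your template below, something like:

size=500m
loops=10

(i.e., bump the size= and loops= lines.)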

Thanks.

>
> 3G, lzo
>
>
> --- my fio-template is
>
> [global]
> bs=4k
> ioengine=sync
> direct=1
> size=__SIZE__
> numjobs=__JOBS__
> group_reporting
> filename=/dev/zram0
> loops=1
> buffer_pattern=0xbadc0ffee
> scramble_buffers=0
>
> [seq-read]
> rw=read
> stonewall
>
> [rand-read]
> rw=randread
> stonewall
>
> [seq-write]
> rw=write
> stonewall
>
> [rand-write]
> rw=randwrite
> stonewall
>
> [mixed-seq]
> rw=rw
> stonewall
>
> [mixed-rand]
> rw=randrw
> stonewall
>
>
> #separate test with
> #buffer_compress_percentage=50
>
>
>
> --- my create-zram script is as follows.
>
>
> #!/bin/sh
>
> rmmod zram
> modprobe zram
>
> if [ -e /sys/block/zram0/initstate ]; then
> 	initdone=`cat /sys/block/zram0/initstate`
> 	if [ $initdone = 1 ]; then
> 		echo "init done"
> 		exit 1
> 	fi
> fi
>
> echo 8 > /sys/block/zram0/max_comp_streams
>
> echo lzo > /sys/block/zram0/comp_algorithm
> cat /sys/block/zram0/comp_algorithm
>
> cat /sys/block/zram0/max_comp_streams
> echo $1 > /sys/block/zram0/disksize
>
>
>
>
>
> --- and I use it as
>
>
> #!/bin/sh
>
> DEVICE_SZ=$((3 * 1024 * 1024 * 1024))
> FREE_SPACE=$(($DEVICE_SZ / 10))
> LOG=/tmp/fio-zram-test
> LOG_SUFFIX=$1
>
> function reset_zram
> {
> 	rmmod zram
> }
>
> function create_zram
> {
> 	./create-zram $DEVICE_SZ
> }
>
> function main
> {
> 	local j
> 	local i
>
> 	if [ "z$LOG_SUFFIX" = "z" ]; then
> 		LOG_SUFFIX="UNSET"
> 	fi
>
> 	LOG=$LOG-$LOG_SUFFIX
>
> 	for i in {1..10}; do
> 		reset_zram
> 		create_zram
>
> 		cat fio-test-template | sed s/__JOBS__/$i/ | sed s/__SIZE__/$((($DEVICE_SZ/$i - $FREE_SPACE)/(1024*1024)))M/ > fio-test
> 		echo "#jobs$i" >> $LOG
> 		time fio ./fio-test >> $LOG
> 	done
>
> 	reset_zram
> }
>
> main
>
>
>
>
> -- then I use this simple script
>
> #!/bin/sh
>
> if [ "z$2" = "z" ]; then
> cat $1 | egrep "#jobs|READ|WRITE" | awk '{printf "%-15s %15s\n", $1, $3}' | sed s/aggrb=// | sed s/,//
> else
> cat $1 | egrep "#jobs|READ|WRITE" | awk '{printf " %-15s\n", $3}' | sed s/aggrb=// | sed s/\#jobs[0-9]*// | sed s/,//
> fi
>
>
>
>
> as
>
> ./squeeze.sh fio-zram-test-4-stream > 4s
> ./squeeze.sh fio-zram-test-8-stream A > 8s
> ./squeeze.sh fio-zram-test-per-cpu A > pc
>
> and
>
> paste 4s 8s pc > result
>
>
> -ss
