Subject: Re: [PATCH v4 2/4] zram: implement deduplication in zram

Hello,

On (04/27/17 15:57), Joonsoo Kim wrote:
[..]
> I tested with your benchmark and found that contention happens
> since the data page is perfectly the same. All the written data (2GB)
> is de-duplicated.

yes, a statically filled buffer, to guarantee that the
compression/decompression numbers/impact will be stable; otherwise the
test results are "apples vs oranges" :)

> I tried to optimize it with read-write lock but I failed since
> there is another contention, which cannot be fixed simply. That is
> zsmalloc. We need to map the object and compare the content of the
> compressed page to check de-duplication. Zsmalloc pins the object
> by using bit spinlock when mapping. So, parallel readers to the same
> object contend here.
>
> I think that this case is so artificial and, in practice, there
> would be no case that the same data page is repeatedly and parallel
> written as like this. So, I'd like to keep current code. How do you
> think about it, Sergey?

I agree. thanks for taking a look!
I see no blockers for the patch set.
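
just to make the contention point concrete, here is a rough sketch of the
dedup lookup path (hypothetical names, not the actual patch code; only
zs_map_object()/zs_unmap_object() are the real zsmalloc API, and
refcounting against a concurrent free is elided):

static struct zram_entry *zram_dedup_match(struct zram *zram, void *cmem,
					   unsigned int clen, u32 checksum)
{
	struct zram_hash *hash = &zram->hash[checksum % zram->hash_size];
	struct zram_entry *entry;
	unsigned long handle;
	void *obj;

	spin_lock(&hash->lock);
	entry = zram_hash_search(hash, checksum);	/* rb-tree walk */
	spin_unlock(&hash->lock);
	if (!entry)
		return NULL;

	/*
	 * zs_map_object() pins the object with a bit spinlock, so all
	 * writers of identical content (even to different offsets)
	 * serialize on this memcmp() of the compressed data.
	 */
	handle = entry->handle;
	obj = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);
	if (clen != entry->len || memcmp(obj, cmem, clen))
		entry = NULL;
	zs_unmap_object(zram->mem_pool, handle);
	return entry;
}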


<off topic>

ok, in general, it seems that (correct me if I'm wrong)

a) the higher the dedup ratio the slower zram _can_ perform.

because dedup can create parallel access scenarios that previously never
existed: writes to different offsets can now compete for the same
deduplicated zsmalloc object.

and... a tricky and probably exaggerated one:

b) the lower the dedup ratio the slower zram _can_ perform.

think of an almost-full zram device with a dedup ratio of just 3-5%. tree
lookups are serialized by the hash->lock. a balanced tree gives us slow
lookup-complexity growth; it's still there, but we can live with it. at the
same time, a low dedup ratio means that we have wasted CPU cycles on
checksum calculation (potentially for millions of pages, if the zram device
in question is X gigabytes in size), and this can't go unnoticed.
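
in code terms, per written page it looks roughly like this (reusing the
hypothetical zram_dedup_match() from the sketch above; crc32() just stands
in for whatever checksum the patch actually uses, and
zram_store_new()/zram_reuse_entry() are made-up names):

static int zram_write_page(struct zram *zram, void *cmem, unsigned int clen,
			   u32 index)
{
	struct zram_entry *entry;
	u32 checksum;

	/* paid unconditionally, on every single written page */
	checksum = crc32(0, cmem, clen);

	/*
	 * lookup + memcmp as in the sketch above; with a 3-5% dedup
	 * ratio this returns NULL ~95% of the time, i.e. the checksum
	 * cycles above bought us nothing.
	 */
	entry = zram_dedup_match(zram, cmem, clen, checksum);
	if (!entry)
		return zram_store_new(zram, cmem, clen, index);	/* normal path */

	return zram_reuse_entry(zram, entry, index);		/* dedup hit */
}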


it's just that I was slightly confused by the performance numbers you
have observed. some tests were

: It shows performance degradation roughly 13% and save 24% memory. Maybe,
: it is due to overhead of calculating checksum and comparison.

while others were

: There is no performance degradation and save 23% memory.


I understand that you didn't perform direct io, flush, fsync, etc. and
there is a whole bunch of factors that could have affected your tests,
e.g. writeback, etc., but the numbers are still very unstable.
maybe now we will have a bit better understanding :)

</off topic>

> Just note, if we do parallel read (direct-io) to the same offset,
> zsmalloc contention would happen regardless deduplication feature.
> It seems that it's fundamental issue in zsmalloc.

that's a good find.
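
fwiw, it's visible right in the read path; two direct-io readers of the
same offset both execute (roughly, from memory, signatures may be off)

	src = zs_map_object(zram->mem_pool, handle, ZS_MM_RO);	/* pin */
	zcomp_decompress(zstrm, src, size, dst);
	zs_unmap_object(zram->mem_pool, handle);		/* unpin */

and since zs_map_object() pins the object via the handle's bit spinlock,
the second reader spins until the first one unmaps, dedup or not.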

-ss
