Subject: Re: [PATCH v4 2/4] zram: implement deduplication in zram
On Wed, Apr 26, 2017 at 01:28:26PM +0900, Sergey Senozhatsky wrote:
> On (04/26/17 09:52), js1304@gmail.com wrote:
> [..]
> > +struct zram_hash {
> > + spinlock_t lock;
> > + struct rb_root rb_root;
> > };
>
> just a note.
>
> we can easily have N CPUs spinning on ->lock for __zram_dedup_get() lookup,
> which can involve a potentially slow zcomp_decompress() [zlib, for example,
> with 64k pages] and memcmp(). the larger PAGE_SHIFT is, the more serialized
> IOs become. in theory, at least.
>
> CPU0                         CPU1                     ...   CPUN
>
> __zram_bvec_write()          __zram_bvec_write()            __zram_bvec_write()
>  zram_dedup_find()            zram_dedup_find()              zram_dedup_find()
>   spin_lock(&hash->lock);
>                               spin_lock(&hash->lock);        spin_lock(&hash->lock);
>   __zram_dedup_get()
>    zcomp_decompress()
>    ...
>
>
> so maybe there is a way to use a read-write lock instead of a spinlock for the hash
> and reduce write/read IO serialization.

In fact, dedup releases hash->lock before doing zcomp_decompress(), so
the contention above cannot happen.
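
To be clear, the lookup only holds the lock for the rb_tree walk and a
refcount bump; the slow part runs unlocked. Roughly (a simplified sketch,
not the exact patch code; zram_dedup_match() and zram_entry_put() are
placeholder names here):

static struct zram_entry *__zram_dedup_get(struct zram *zram,
				struct zram_hash *hash,
				unsigned char *mem,
				struct zram_entry *entry)
{
	/* called with hash->lock held; entry was found in the rb_tree */
	entry->refcount++;		/* pin the entry while still locked */
	spin_unlock(&hash->lock);	/* drop the lock before the slow path */

	/* zcomp_decompress() + memcmp() run without the hash lock */
	if (zram_dedup_match(zram, entry, mem))
		return entry;

	zram_entry_put(zram, entry);	/* unpin on mismatch */
	return NULL;
}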

However, contention is still possible while traversing the rb_tree. If
your fio run shows that contention, I will change it to a read-write lock.
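
If so, the conversion itself would be small; something like this (just a
sketch, assuming only insert/delete need exclusive access):

struct zram_hash {
	rwlock_t lock;			/* was: spinlock_t lock; */
	struct rb_root rb_root;
};

/* lookup: concurrent readers may walk the rb_tree */
read_lock(&hash->lock);
/*
 * rb_tree search as before; note the entry refcount would have to
 * become atomic (e.g. refcount_t), since two readers could now pin
 * the same entry concurrently.
 */
read_unlock(&hash->lock);

/* insert/delete: still exclusive */
write_lock(&hash->lock);
/* rb_link_node() + rb_insert_color(), or rb_erase(), as before */
write_unlock(&hash->lock);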

Thanks.
