Subject: Re: [PATCH] zram: remove global tb_lock by using lock-free CAS
On Mon, 2014-05-12 at 14:15 +0900, Minchan Kim wrote:
> On Sat, May 10, 2014 at 02:10:08PM +0800, Weijie Yang wrote:
> > On Thu, May 8, 2014 at 2:24 PM, Minchan Kim <minchan@kernel.org> wrote:
> > > On Wed, May 07, 2014 at 11:52:59PM +0900, Joonsoo Kim wrote:
> > >> >> The most popular use of zram is as in-memory swap for small embedded systems,
> > >> >> so I don't want to increase the memory footprint without a good reason, even if
> > >> >> it improves a synthetic benchmark. Although it's 1M for 1G, it isn't small if we
> > >> >> consider the compression ratio and the real free memory after boot.
> > >>
> > >> We can use a bit spin lock, and this would not increase the memory
> > >> footprint on 32-bit platforms.
> > >
> > > Sounds like an idea.
> > > Weijie, do you mind testing with a bit spin lock?
> >
> > Yes, I re-tested them.
> > This time, I tested each case 10 times and took the average (KB/s).
> > (The test machine and method are the same as in the previous mail.)
> >
> > Iozone test result:
> >
> > Test            BASE       CAS        spinlock   rwlock     bit_spinlock
> > -------------------------------------------------------------------------
> > Initial write   1381094    1425435    1422860    1423075    1421521
> > Rewrite         1529479    1641199    1668762    1672855    1654910
> > Read            8468009    11324979   11305569   11117273   10997202
> > Re-read         8467476    11260914   11248059   11145336   10906486
> > Reverse read    6821393    8106334    8282174    8279195    8109186
> > Stride read     7191093    8994306    9153982    8961224    9004434
> > Random read     7156353    8957932    9167098    8980465    8940476
> > Mixed workload  4172747    5680814    5927825    5489578    5972253
> > Random write    1483044    1605588    1594329    1600453    1596010
> > Pwrite          1276644    1303108    1311612    1314228    1300960
> > Pread           4324337    4632869    4618386    4457870    4500166
> >
> > Fio test result:
> >
> > Test       base      CAS       spinlock  rwlock    bit_spinlock
> > ----------------------------------------------------------------
> > seq-write  933789    999357    1003298   995961    1001958
> > seq-read   5634130   6577930   6380861   6243912   6230006
> > seq-rw     1405687   1638117   1640256   1633903   1634459
> > rand-rw    1386119   1614664   1617211   1609267   1612471
> >
> >
> > The base is v3.15.0-rc3; the others use a per-meta-entry lock.
> > Every optimization method shows higher performance than the base; however,
> > it is hard to say which method is the most appropriate.
>
> The difference between CAS and bit_spinlock isn't too big, so I prefer the general method.

Well, I imagine that's because the test system is small enough that the
lock is not stressed enough. Bit spinlocks are considerably slower than
other lock types. I'm not sure we really care for the case of zram, but in
general I really dislike this lock. It suffers from just about every
problem our regular spinlocks try to optimize away, especially unfairness
in who gets the lock when contended (ticketing).

Thanks,
Davidlohr
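
For concreteness, here is a minimal sketch of the kind of per-entry bit spin
lock suggested in the quoted discussion above. The point is that the lock is a
single bit borrowed from a word the per-entry metadata already carries, which
is why it adds no memory footprint on a 32-bit platform. The struct layout,
field names, and bit number below are illustrative assumptions only, not the
actual zram table entry or the patch under discussion.

/*
 * Sketch only: a per-entry lock that costs no extra storage because it
 * lives in a spare bit of an existing word.
 */
#include <linux/bit_spinlock.h>

#define ZRAM_ENTRY_LOCK	0		/* hypothetical spare bit in ->value */

struct zram_table_entry_sketch {
	unsigned long handle;
	unsigned long value;		/* size/flags word; bit 0 doubles as the lock */
};

static void zram_entry_lock(struct zram_table_entry_sketch *entry)
{
	bit_spin_lock(ZRAM_ENTRY_LOCK, &entry->value);
}

static void zram_entry_unlock(struct zram_table_entry_sketch *entry)
{
	bit_spin_unlock(ZRAM_ENTRY_LOCK, &entry->value);
}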
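And as a rough illustration of the fairness argument: a bit spin lock is
essentially a test-and-set loop, so whichever waiter wins the next cache-line
race gets the lock, whereas a ticket lock serves waiters in strict FIFO order.
The sketch below is simplified userspace C11, not kernel code, and is only
meant to show the structural difference between the two.

#include <stdatomic.h>

/* test-and-set lock: no ordering among waiters, so starvation is possible */
typedef struct { atomic_flag locked; } tas_lock_t;

static void tas_lock(tas_lock_t *l)
{
	while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
		;	/* spin until we happen to win the race */
}

static void tas_unlock(tas_lock_t *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

/* ticket lock: each waiter takes a number and is served in order */
typedef struct {
	atomic_uint next;	/* next ticket to hand out */
	atomic_uint owner;	/* ticket currently allowed to hold the lock */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
	unsigned int me = atomic_fetch_add_explicit(&l->next, 1, memory_order_relaxed);

	while (atomic_load_explicit(&l->owner, memory_order_acquire) != me)
		;	/* spin until it is our turn: strict FIFO */
}

static void ticket_unlock(ticket_lock_t *l)
{
	atomic_fetch_add_explicit(&l->owner, 1, memory_order_release);
}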



\
 
 \ /
  Last update: 2014-05-12 17:21    [W:0.669 / U:0.100 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site