Date: 2016-03-22
From: Minchan Kim
Subject: Re: [PATCH] zram: export the number of available comp streams
Hello Sergey,

On Mon, Mar 21, 2016 at 04:51:28PM +0900, Sergey Senozhatsky wrote:
> Hello Minchan,
>
> On (03/18/16 10:25), Minchan Kim wrote:
> [..]
> > > aha, ok.
> > >
> > > > (i.e., simpler code, removing the
> > > > max_comp_streams knob, no need for this stat of yours, a guaranteed
> > > > parallelism level, and bounded memory consumption).
> > >
> > > I'll take a look and prepare some numbers (most likely next week).
> >
> > Sounds great to me!
>
> so, schematically, I have this thing now. streams are per-cpu and contain a
> scratch buffer and work memory.
>
> zram_bvec_write()
> {
>	*get_cpu_ptr(comp->stream);
>	zcomp_compress();
>	zs_malloc();
>	put_cpu_ptr(comp->stream);
> }
>
> this, however, makes zsmalloc unhappy. the pool has GFP_NOIO | __GFP_HIGHMEM
> gfp, and GFP_NOIO is ___GFP_DIRECT_RECLAIM|___GFP_KSWAPD_RECLAIM. this
> __GFP_DIRECT_RECLAIM is in conflict with per-cpu streams, because
> per-cpu streams require disabled preemption (up until we copy the stream
> buffer to the zspage). so what options do we have here... off the top of
> my head (w/o a lot of thinking)...

Indeed.

> -- remove __GFP_DIRECT_RECLAIM from the pool gfp mask, which is a bit risky...
> IOW, make the pool gfp '___GFP_KSWAPD_RECLAIM | __GFP_HIGHMEM'

Yeb. It would be okay for zram-swap but not zram-blk.
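
(For reference, that option would roughly be a one-line change where the pool
is created in zram_meta_alloc(); the names below are from the current code as
I remember them, so treat it only as an illustration.)

	/* zram_meta_alloc(): drop direct reclaim from the pool's gfp mask */
	meta->mem_pool = zs_create_pool(pool_name,
					__GFP_KSWAPD_RECLAIM | __GFP_HIGHMEM);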

> -- kmalloc/kfree a temp buffer for every RW op, which is ugly... because
> it sort of defeats the whole purpose of per-cpu streams.

How about this?

zram_bvec_write()
{
retry:
	*get_cpu_ptr(comp->stream);
	zcomp_compress();
	handle = zs_malloc((gfp & ~__GFP_DIRECT_RECLAIM) | __GFP_NOWARN);
	if (!handle) {
		put_cpu_ptr(comp->stream);
		handle = zs_malloc(gfp);
		goto retry;
	}
	put_cpu_ptr(comp->stream);
}

If the per-cpu model is really a performance win, it is worth trying.
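
To be a bit more concrete, here is a fuller sketch of the same idea -- purely
illustrative, not a patch: it assumes zs_malloc() grows a per-call gfp
argument, keeps the handle obtained on the slow path across the retry so only
the compression is redone, and the function/field names and signatures are
schematic.

	static int zram_bvec_write(struct zram *zram, void *src)
	{
		struct zcomp_strm *zstrm;
		unsigned long handle = 0;
		unsigned int comp_len = 0;
		/* what the zsmalloc pool uses today */
		gfp_t gfp = GFP_NOIO | __GFP_HIGHMEM;
		int ret;

	retry:
		/* preemption stays disabled while we hold the per-cpu stream */
		zstrm = *get_cpu_ptr(zram->comp->stream);
		ret = zcomp_compress(zstrm, src, &comp_len);
		if (ret) {
			put_cpu_ptr(zram->comp->stream);
			return ret;
		}

		if (!handle)
			/* fast path: we must not sleep under the per-cpu stream */
			handle = zs_malloc(zram->meta->mem_pool, comp_len,
					   (gfp & ~__GFP_DIRECT_RECLAIM) |
					   __GFP_NOWARN);
		if (!handle) {
			put_cpu_ptr(zram->comp->stream);
			/* slow path: preemption enabled again, reclaim allowed */
			handle = zs_malloc(zram->meta->mem_pool, comp_len, gfp);
			if (!handle)
				return -ENOMEM;
			/*
			 * comp_len can change when we recompress, so a real
			 * version must recheck it against the allocated size.
			 */
			goto retry;
		}

		/* map the zspage, copy zstrm->buffer into it, then release */
		put_cpu_ptr(zram->comp->stream);
		return 0;
	}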

