Subject: Re: [PATCH 2/3] zram: support page-based parallel write
On (10/24/16 14:58), Minchan Kim wrote:
[..]
> > struct blk_plug_cb *blk_check_plugged(blk_plug_cb_fn unplug, void *data,
> >                                       int size)
> > {
> >         struct blk_plug *plug = current->plug;
> >         struct blk_plug_cb *cb;
> >
> >         if (!plug)
> >                 return NULL;
> >
> >         list_for_each_entry(cb, &plug->cb_list, list)
> >                 if (cb->callback == unplug && cb->data == data)
> >                         return cb;
>
> Normally, this routine checks and bails out early if the cb has already
> been plugged, so there shouldn't be too many allocations in there.
>
> Having said that, there is no need to allocate the cb in the block layer.
> The driver can allocate it once and reuse it by passing it to
> blk_check_plugged. I was tempted to introduce such an API into the block
> layer, but it's just an easy optimization to do once this patchset
> settles down, so I didn't consider it in this patchset.

aha. thanks.
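
for reference, the usual pattern looks roughly like the sketch below
(modeled on how md uses blk_check_plugged(); the zram_* names and the
io_wait field are made up, not from the patchset). the callback owns the
cb that blk_check_plugged() allocated and frees it when the plug is
flushed:

#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/wait.h>

/* struct zram as in drivers/block/zram/zram_drv.h, plus a hypothetical
 * io_wait wait queue that the compression worker sleeps on. */

/* runs when the task's plug is flushed; cb->data is what we passed in */
static void zram_unplug(struct blk_plug_cb *cb, bool from_schedule)
{
        struct zram *zram = cb->data;

        wake_up(&zram->io_wait);        /* kick zram's own worker */
        kfree(cb);                      /* the callback owns the cb */
}

/* true if the current task is plugged and our cb is on its cb_list */
static bool zram_check_plugged(struct zram *zram)
{
        return !!blk_check_plugged(zram_unplug, zram,
                                   sizeof(struct blk_plug_cb));
}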

> > > We have been using sysfs to tune zram for a long time.
> > > Please suggest ideas if you have a better one. :)
> >
> > yeah, but this one feels like a super-hacky knob. basically
> >
> > "enable when you can't tweak your usage patterns. this will tweak the driver".
> >
> > so I'd probably prefer to keep it hidden for now (maybe eventually
> > we will come to some "out-of-zram" solution. but the opposition may
> > be "fix your usage pattern").
>
> Frankly speaking, I tend to agree.
>
> As I mentioned in the cover letter or somewhere, I don't want to add this knob.
> One option is to admit it's a trade-off. So, if someone enables this config,
> he will lose random/direct IO performance at this moment, while he can get a
> big benefit for buffered sequential read/write.
> What do you think?

yes, sounds like it. a config option, probably with a big-big warning
sign and no sysfs knob.
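
something along these lines in drivers/block/zram/Kconfig, perhaps (the
option name below is just a placeholder, not the one from the patchset):

# placeholder name, not from the patchset
config ZRAM_PARALLEL_WRITE
        bool "Parallelize zram page writes (experimental)"
        depends on ZRAM
        default n
        help
          Queue incoming pages and compress them in zram's own worker
          contexts instead of the caller's context.  Buffered sequential
          read/write throughput can improve considerably, but random and
          direct IO performance will regress.  If unsure, say N.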

> > so this knob is not even guaranteed to be there all the time.
> >
> > I wish I could suggest a sound alternative, but I don't have one
> > at the moment. Maybe I'll have a chance to speak to block-dev people
> > next week.
>
> Okay. But I think it's not a good idea to hurt the wb context, as you mentioned.
> IOW, IO queuing could be parallelized across multiple wb contexts, but
> servicing (i.e., compression) should be done in zram contexts, not the
> wb context.

yep. too many things can go wrong. we can schedule requests on a
different die/package/socket, probably pressuring data caches and
then there are NUMA systems, and so on and on and on. so I can
easily imagine a "fix your user space" response.
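
e.g. the split would look roughly like the sketch below (the queue/work
fields, zram_request and the zram_* helpers are all made up for
illustration, not taken from the patchset): the wb context only queues the
page and kicks zram's workqueue, and the compression itself happens in the
worker:

#include <linux/blkdev.h>
#include <linux/slab.h>
#include <linux/workqueue.h>

struct zram_request {                           /* hypothetical */
        struct list_head entry;
        struct bio *bio;
};

/* wb context: just queue the request, never compress here */
static void zram_queue_bio(struct zram *zram, struct bio *bio)
{
        struct zram_request *req = kmalloc(sizeof(*req), GFP_NOIO);

        if (!req) {
                zram_make_request_sync(zram, bio);      /* hypothetical fallback */
                return;
        }

        req->bio = bio;
        spin_lock(&zram->queue_lock);                   /* hypothetical fields */
        list_add_tail(&req->entry, &zram->queue);
        spin_unlock(&zram->queue_lock);

        queue_work(zram->wq, &zram->work);              /* wake zram's context */
}

/* zram context: drain the queue, compress and store the pages */
static void zram_work_fn(struct work_struct *work)
{
        struct zram *zram = container_of(work, struct zram, work);

        /* pop requests off zram->queue, compress the pages, store them in
         * zsmalloc and bio_endio() -- all outside of the wb context */
}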

-ss
