From: Mel Gorman <mgorman@suse.de>
Subject: Re: [PATCH 6/6] cfq: Increase default value of target_latency

On Thu, Jun 26, 2014 at 12:50:32PM -0400, Jeff Moyer wrote:
> Mel Gorman <mgorman@suse.de> writes:
>
> > On Thu, Jun 26, 2014 at 11:36:50AM -0400, Jeff Moyer wrote:
> >> Right, and I guess I hadn't considered that case as I thought folks used
> >> more than one spinning disk for such workloads.
> >>
> >
> > They probably are but by and large my IO testing is based on simple
> > storage. The reasoning is that if we get the simple case wrong then we
> > probably are getting the complex case wrong too or at least not performing
> > as well as we should. I also don't use SSD on my own machines for the
> > same reason.
>
> A single disk is actually the hard case in this instance, but I
> understand what you're saying. ;-)
>
> >> My main reservation about this change is that you've only provided
> >> numbers for one benchmark.
> >
> > The other obvious one to run would be pgbench workloads but it's a rathole of
> > arguing whether the configuration is valid and whether it's inappropriate
> > to test on simple storage. The tiobench tests alone take a long time to
> > complete -- 1.5 hours on a simple machine, 7 hours on a low-end NUMA machine.
>
> And we should probably run our standard set of I/O exercisers at the
> very least. But, like I said, it seems like wasted effort.
>

Out of curiosity, what do you consider to be the standard set of I/O
exercisers? I have a whole battery of them that are run against major
releases to track performance over time -- tiobench (it's stupid, but too
many people use it), fsmark used in various configurations (single/multi
threaded, zero-sized and large files), postmark (file sizes fairly
small, working set 2xRAM), bonnie++ (2xRAM), ffsb used in a mail server
configuration (taken from btrfs tests), dbench3 (checking in-memory updates,
not a realistic IO benchmark), dbench4 (a bit more realistic, although at high
thread counts it gets silly and overall it's not a stable predictor of
performance), sysbench in various configurations, pgbench used in limited
configurations, stutter which tends to hit the worst-case interactivity
issues experienced on desktops, and kernel builds are the main ones. It
takes days to churn through the full set of tests, which is why I don't do
it for a patch series. I selected tiobench this time because it was the
most reliable test to cover both single and multiple-sources-of-IO cases.
If I merge a major change I'll usually then watch the next major release
and double check that nothing else broke.
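
For reference, a minimal sketch of the sort of fsmark run meant by the
"multi threaded, zero-sized files" case above. The mount point, file
count and thread count are illustrative only and not the exact
configuration used for the release tests:

    # illustrative only, run as root against a scratch filesystem
    # zero-byte files (-s 0) created by several threads (-t 4)
    fs_mark -d /mnt/testdisk -s 0 -n 10000 -t 4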

> >> To bump the default target_latency, ideally
> >> we'd know how it affects other workloads. However, I'm having a hard
> >> time justifying putting any time into this for a couple of reasons:
> >> 1) blk-mq pretty much does away with the i/o scheduler, and that is the
> >> future
> >> 2) there is work in progress to convert cfq into bfq, and that will
> >> essentially make any effort put into this irrelevant (so it might be
> >> interesting to test your workload with bfq)
> >>
> >
> > Ok, you've convinced me and I'll drop this patch. For anyone based on
> > kernels from around this time they can tune CFQ or buy a better disk.
> > Hopefully they will find this via Google.
>
> Funny, I wasn't weighing in against your patch. I was merely indicating
> that I personally wasn't going to invest the time to validate it. But,
> if you're ok with dropping it, that's obviously fine with me.
>

I fear the writing is on the wall that it'll never pass the "have you
tested every workload" test and that, no matter what, a counter-example will
be found where it's the wrong setting. If CFQ is going to be irrelevant soon,
it's just not worth wasting the electricity testing this against a mainline
kernel.
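
For anyone who does find this via Google on one of those kernels: CFQ
exports target_latency as a per-device sysfs tunable, so it can be
raised at runtime without carrying a patch. A minimal sketch, assuming
sda is the spinning disk in question and cfq is its active scheduler;
the value 600 is only an example, not a recommendation from this thread:

    # confirm cfq is the active scheduler for the device
    cat /sys/block/sda/queue/scheduler
    # target_latency is in milliseconds, default 300; write as root
    echo 600 > /sys/block/sda/queue/iosched/target_latency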

I'm still interested in what you consider your standard set of IO exercisers
though because I can slot any missing parts into the tests that run for
every mainline release.

The main one I'm missing is the postgres folks' fsync benchmark. I wrote
the automation months ago but never activated it because there are enough
known problems already.

Thanks.

--
Mel Gorman
SUSE Labs

