Subject: Re: [BENCHMARK] 2.4.{18,19{-ck9},20rc1{-aa1}} with contest
On Mon, Nov 11, 2002 at 11:45:06AM -0200, Rik van Riel wrote:
> On Mon, 11 Nov 2002, Andrea Arcangeli wrote:
>
> > [snip bad example by somebody who hasn't read Andrew's patch]
>
> > Anybody claiming there isn't the potential of a global I/O throughput
> > slowdown would be clueless.
>
> IO throughput isn't the point. Due to the fundamental asymmetry

I/O throughput is the whole point of the elevator, and if you change it
that way you can decrease it. Even if you insert the read as the seventh
request instead of the first, you're assuming the reads cannot keep the
I/O pipeline full; that is a realistic assumption for some workloads,
but not all. My example still very much applies, just not at the head
but as the seventh request. I do know the design idea behind
read-latency, unlike what you think; I just didn't remember the
low-level implementation details, which are not important in terms of a
potential slowdown in purely mathematical terms.
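To make this concrete, here is a minimal userspace sketch of the two
insertion policies being argued about. It is not the read-latency code
itself; the depth of 7 and the sector numbers are only examples.

/* toy model: the request queue as an array of sector numbers */
#include <stdio.h>
#include <string.h>

#define QUEUE_MAX	64

static long queue[QUEUE_MAX];
static int queue_len;

/* normal elevator insertion: keep the queue sorted by sector so the
 * head keeps sweeping in one direction and seeks stay low */
static void insert_sorted(long sector)
{
	int i = 0;

	while (i < queue_len && queue[i] < sector)
		i++;
	memmove(&queue[i + 1], &queue[i], (queue_len - i) * sizeof(queue[0]));
	queue[i] = sector;
	queue_len++;
}

/* read-latency style insertion (illustrative only): ignore the sector
 * and put the read at a fixed depth from the head, so it never waits
 * behind a long run of queued writes */
static void insert_read_at_depth(long sector, int depth)
{
	int i = depth < queue_len ? depth : queue_len;

	memmove(&queue[i + 1], &queue[i], (queue_len - i) * sizeof(queue[0]));
	queue[i] = sector;
	queue_len++;
}

int main(void)
{
	long s;
	int i;

	/* eight contiguous writes already queued and sorted */
	for (s = 100; s <= 800; s += 100)
		insert_sorted(s);

	/* a read for a far-away sector arrives and jumps in as the
	 * seventh request instead of going to the sorted position */
	insert_read_at_depth(5000, 6);

	for (i = 0; i < queue_len; i++)
		printf("%ld ", queue[i]);
	printf("\n");
	return 0;
}

With the sorted policy the far-away read would simply go to the tail;
with the fixed-depth policy it lands in the middle of the sorted run of
writes.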

> On the contrary, the decrease of latency will probably bring a
> global throughput increase. Just program throughput, not raw

I know this perfectly well, but you're making assumptions about certain
workloads. I can agree they are realistic workloads on a desktop
machine, but not all workloads are like that.

> That must be why it was backed out ;)

It was backed out because the request size must be big, and it couldn't
be big with such a "feature" enabled, as I said in my previous email. I
just gave you the reason it was backed out; I'm not sure what you are
wondering about.

The fact is that read-latency is a hack to make a special case faster,
and in theory it definitely can hurt some workloads; there is a reason
read-latency isn't the default. Read-latency definitely *can* increase
the seeks; not admitting this and claiming it can only improve
performance is clueless on your part. The implementation detail that it
adds the read as the seventh request instead of as the first request
decreases the probability of a slowdown, but it still has the potential
of slowing something down. This is all about math local to the elevator.
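As a rough illustration of that math (the sector numbers and the
one-dimensional head model are assumptions, not measurements), compare
the total head movement when the queue stays sorted with the movement
when the read has jumped in as the seventh request:

#include <stdio.h>
#include <stdlib.h>

/* sum the head movement needed to serve the requests in the given order */
static long total_seek(const long *order, int n)
{
	long head = 0, dist = 0;
	int i;

	for (i = 0; i < n; i++) {
		dist += labs(order[i] - head);
		head = order[i];
	}
	return dist;
}

int main(void)
{
	/* eight queued writes plus one far-away read, kept in sorted order */
	long sorted[] = { 100, 200, 300, 400, 500, 600, 700, 800, 5000 };
	/* same requests, but the read was inserted as the seventh request */
	long jumped[] = { 100, 200, 300, 400, 500, 600, 5000, 700, 800 };

	printf("sorted order: %ld sectors of head movement\n",
	       total_seek(sorted, 9));
	printf("read jumped ahead: %ld sectors of head movement\n",
	       total_seek(jumped, 9));
	return 0;
}

In this toy case the jumped-ahead order nearly doubles the head
movement; whether that matters globally depends on the workload, which
is exactly the point.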

And IMHO read-latency kind of hides the real problem, which is that we
should limit the queue in bytes, or delay after I/O completion as
mentioned by Andrew, since certain workloads will still be very much
slower than the writes even with read-latency. I'll fix the real
problem in my tree soon; I just need to run a number of benchmarks on
SCSI and IDE to measure a good queue size in bytes for peak contiguous
I/O performance before I can implement that.
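A minimal sketch of why bounding the queue in bytes bounds read latency
(the 2 MB limit and the 40 MB/s sequential rate are purely assumed
numbers, not results from the benchmarks mentioned above):

#include <stdio.h>

int main(void)
{
	/* assumed values for illustration only */
	long queue_limit_bytes = 2 * 1024 * 1024;	/* max queued I/O */
	long disk_rate_bytes = 40 * 1024 * 1024;	/* sequential bytes/sec */

	/* if the queue can never hold more than queue_limit_bytes, a newly
	 * queued read waits at most this long behind already-queued writes */
	double worst_wait = (double)queue_limit_bytes / disk_rate_bytes;

	printf("worst-case wait behind a full queue: %.0f ms\n",
	       worst_wait * 1000);
	return 0;
}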

Andrea
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
