Subject: Re: Degrading disk read performance under 2.2.16
On Mon, 14 Aug 2000, Corin Hartland-Swann wrote:

>I have tried this out, and found that the default settings were:
>elevator ID=232 read_latency=128 write_latency=8192 max_bomb_segments=4

(side note: Jens increased the bomb segments to 32 in the recent 2.2.17pre)

I think we can apply this patch on top of the recent 2.2.17pre:

--- 2.2.17pre11/include/linux/blkdev.h Wed Aug 2 19:33:35 2000
+++ /tmp/blkdev.h Mon Aug 14 17:21:14 2000
@@ -48,9 +48,9 @@

#define ELEVATOR_DEFAULTS \
((elevator_t) { \
- 128, /* read_latency */ \
- 8192, /* write_latency */ \
- 32, /* max_bomb_segments */ \
+ 50000, /* read_latency */ \
+ 100000, /* write_latency */ \
+ 128, /* max_bomb_segments */ \
0 /* queue_ID */ \
})
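
(You don't strictly have to recompile to play with those numbers either:
the 2.2 elevator exports them through the BLKELVGET/BLKELVSET ioctls,
which is what elvtune uses and presumably where the "elevator ID=232 ..."
line quoted above came from. Below is a minimal userspace sketch of that;
the struct layout and ioctl numbers mirror my reading of the 2.2 headers,
so double check them against your include/linux/blkdev.h and
include/linux/fs.h before trusting it.)

/*
 * Sketch only: query and set the elevator parameters of a block device
 * at runtime via BLKELVGET/BLKELVSET, using the same values as the
 * patch above.  Run it as root against e.g. /dev/hda.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/ioctl.h>

typedef struct blkelv_ioctl_arg_s {
	int queue_ID;
	int read_latency;
	int write_latency;
	int max_bomb_segments;
} blkelv_ioctl_arg_t;

#ifndef BLKELVGET
#define BLKELVGET _IO(0x12,106)	/* get elevator parameters */
#define BLKELVSET _IO(0x12,107)	/* set elevator parameters */
#endif

int main(int argc, char **argv)
{
	blkelv_ioctl_arg_t elv;
	int fd;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (ioctl(fd, BLKELVGET, &elv) < 0) {
		perror("BLKELVGET");
		return 1;
	}
	printf("elevator ID=%d read_latency=%d write_latency=%d "
	       "max_bomb_segments=%d\n", elv.queue_ID, elv.read_latency,
	       elv.write_latency, elv.max_bomb_segments);

	/* same values as in the patch above */
	elv.read_latency = 50000;
	elv.write_latency = 100000;
	elv.max_bomb_segments = 128;
	if (ioctl(fd, BLKELVSET, &elv) < 0) {
		perror("BLKELVSET");
		return 1;
	}
	close(fd);
	return 0;
}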

Corin, could you test the _below_ patch on top of 2.2.16? (Please don't
switch to 2.2.17pre while doing the benchmarks, because I don't want to
add other variables to the equation.)

--- 2.2.16/include/linux/blkdev.h Thu Jun 22 00:07:54 2000
+++ /tmp/blkdev.h Mon Aug 14 17:21:14 2000
@@ -48,9 +48,9 @@

#define ELEVATOR_DEFAULTS \
((elevator_t) { \
- 128, /* read_latency */ \
- 8192, /* write_latency */ \
- 4, /* max_bomb_segments */ \
+ 50000, /* read_latency */ \
+ 100000, /* write_latency */ \
+ 128, /* max_bomb_segments */ \
0 /* queue_ID */ \
})

The reason read latency was so small is that it was meant to completely
prevent large writes from stalling reads. But that was too aggressive
against throughput, and all people really care about is that a task can't
be starved for 49 hours (more or less, depending on the speed of the
disk), as could happen in real life with the 2.2.15 elevator.

The current 2.2.17pre code also calculates the latency as a function of
the position the request is placed at in the queue. This logic can be
turned off as well, once we give up on making a background write
invisible. (That's not a big deal with the above settings, but it could
save a few CPU cycles.)
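
To make that concrete, here is a deliberately simplified illustration of
the latency-bounded insertion idea -- toy list code, not the real
ll_rw_blk.c: each queued request carries a budget of how many times it may
still be passed, a new request can only be sorted in front of requests
that can all still afford one more pass, and every pass costs one unit, so
once a budget reaches zero nothing can jump ahead of that request anymore.

/*
 * Simplified sketch of latency-bounded elevator insertion.  Not the
 * actual 2.2 elevator code, just the mechanism the numbers above tune.
 */
#include <stdio.h>

struct request {
	long sector;		/* target sector of the I/O              */
	int latency;		/* how many times it may still be passed */
	struct request *next;
};

static void elevator_insert(struct request **queue, struct request *new_rq,
			    int latency)
{
	struct request **p = queue;

	new_rq->latency = latency;	/* read_latency or write_latency */

	/* walk head to tail looking for the sector-sorted position */
	while (*p) {
		struct request *rq = *p, *q;
		int affordable = 1;

		if (new_rq->sector >= rq->sector) {
			p = &rq->next;
			continue;
		}
		/* sorted spot found: every request that would be pushed
		   back must still be able to afford one more pass */
		for (q = rq; q; q = q->next)
			if (q->latency <= 0)
				affordable = 0;
		if (!affordable) {
			p = &rq->next;	/* can't pass it, keep walking */
			continue;
		}
		for (q = rq; q; q = q->next)
			q->latency--;	/* they all wait one slot longer */
		break;
	}
	new_rq->next = *p;
	*p = new_rq;
}

int main(void)
{
	struct request a = { 1000, 0, NULL }, b = { 2000, 0, NULL };
	struct request r = { 500, 0, NULL };
	struct request *queue = NULL, *q;

	elevator_insert(&queue, &a, 128);	/* queue: 1000           */
	elevator_insert(&queue, &b, 128);	/* sorted after 1000     */
	elevator_insert(&queue, &r, 50000);	/* jumps in front, costs
						   one pass to a and b   */
	for (q = queue; q; q = q->next)
		printf("sector %ld (latency left %d)\n", q->sector, q->latency);
	return 0;
}

In this toy model read_latency=128 lets a read be passed at most 128 times
before it pins its position, while 50000 almost never triggers on a sane
queue depth -- which is why throughput comes back and the only thing still
guaranteed is that starvation stays bounded.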

There's no way to reconcile the two worlds (at least without user
intervention). If we want to make a background write invisible, then due
to the nature of reads and/or of paging we must do a huge number of seeks
at high frequency. Doing that means screwing up benchmark numbers, and
everybody cares about benchmarks. So I just gave up on providing good
latency by default (you have to tune by hand to get that), and I only care
about avoiding the 49-hour starvation case :)), which was the real bugfix
after all.

>Can anyone point me to any documentation on this elevator code? I was
>wondering what exactly the different variables do...
>
>I found that increasing the read_latency to match write_latency fixes the
>problems perfectly - thanks for the pointer!
>
>As a recap, here are the 2.2.15 (last known good) results:
>
>==> 2.2.15 <==
>----- ------ ------- ---- ------------- -------------- --------------
> Dir Size BlkSz Thr# Read (CPU%) Write (CPU%) Seeks (CPU%)
>----- ------ ------- ---- ------------- -------------- --------------
>/mnt/ 256 4096 1 27.1371 10.3% 26.7979 23.0% 146.187 0.95%
>/mnt/ 256 4096 2 27.1219 10.7% 26.6606 23.2% 142.233 0.60%
>/mnt/ 256 4096 4 26.9915 10.6% 26.4289 22.9% 142.789 0.50%
>/mnt/ 256 4096 16 26.4320 10.5% 26.1310 23.0% 147.424 0.52%
>/mnt/ 256 4096 32 25.3407 10.1% 25.6822 22.7% 150.750 0.57%

Please apply the second patch in this email to 2.2.16 and show what
tiotest produces.

Ah, and BTW, ignore the "Seeks" column completely; that part of the
benchmark is broken. I fixed it in my tree for my local benchmarks here;
here's the fix:

--- tiobench-0.3.1/tiotest.c.~1~ Sun May 14 15:48:51 2000
+++ tiobench-0.3.1/tiotest.c Mon Aug 14 17:27:19 2000
@@ -858,7 +858,7 @@
unsigned int seed;
struct timeval r;

- if(gettimeofday( &r, NULL ) == 0)
+ if(0 && gettimeofday( &r, NULL ) == 0)
{
seed = r.tv_usec;
}
(Note the above only means the "Seeks" column makes sense on your local
machine without changing glibc; if you replace the pseudo random number
generator in glibc between runs of tiotest, then the numbers can't be
compared anymore.)
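
(If you want seek numbers that stay comparable even across a glibc
upgrade, the simplest route is to take the offsets from a tiny fixed-seed
generator that doesn't live in glibc at all. A hypothetical sketch, not
part of tiotest:)

/*
 * Deterministic seek offsets: a trivial LCG (Numerical Recipes
 * constants) with a fixed seed produces the same sequence on every run
 * and with every libc, so "Seeks" results stay comparable.
 */
#include <stdio.h>

static unsigned long state = 4207;	/* fixed seed, any constant works */

static unsigned long seek_rand(void)
{
	state = state * 1664525UL + 1013904223UL;
	return state;
}

int main(void)
{
	unsigned long blocks = 256UL * 1024 * 1024 / 4096; /* 256MB file, 4k blocks */
	int i;

	/* the same ten offsets come out on every run */
	for (i = 0; i < 10; i++)
		printf("seek to block %lu\n", seek_rand() % blocks);
	return 0;
}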

Also make sure to always run such a benchmark on a fresh filesystem,
right after a mke2fs (what I usually do is run mke2fs, reboot, and run the
benchmark in single user mode).

>Now, does anyone (Andrea in particular) know where the defaults are set? I

See the patch.

>As an aside (sorry, I keep going off on tangents), can anybody explain why
>writes are twice as expensive in CPU-time as reads? This is not a new

We do writes only via kflushd, and in 2.2.x this implies flushing the TLB
and rescheduling for every few kilobytes written. In 2.4.x things should
be a little better, since we at least don't flush the TLB anymore while
switching to kernel daemons that have a NULL tsk->mm.
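
(For reference, here is a deliberately simplified userspace sketch of that
lazy-TLB shortcut -- toy types and printfs standing in for the real 2.4
scheduler code: a task with a NULL mm just keeps running on whatever
address space was already loaded, so there is no page-table reload and
therefore no TLB flush.)

/*
 * Toy model of the 2.4 lazy-TLB idea, not the real scheduler code.
 * Switching to a kernel thread (mm == NULL), such as kflushd, borrows
 * the previous address space instead of reloading the page tables,
 * which is what avoids the TLB flush that 2.2.x still pays.
 */
#include <stdio.h>
#include <stddef.h>

struct mm_struct { int pgd; };			/* stand-in */

struct task_struct {
	struct mm_struct *mm;		/* NULL for pure kernel threads  */
	struct mm_struct *active_mm;	/* address space actually in use */
};

/* stand-in for loading a new page directory: on x86 this flushes the TLB */
static void switch_mm(struct mm_struct *prev, struct mm_struct *next)
{
	(void)prev; (void)next;
	printf("reload page tables -> TLB flushed\n");
}

static void context_switch_mm(struct task_struct *prev,
			      struct task_struct *next)
{
	if (next->mm == NULL) {
		/* kernel thread: borrow the old address space, no flush */
		next->active_mm = prev->active_mm;
		printf("lazy TLB: no flush\n");
	} else if (next->mm != prev->active_mm) {
		/* different user address space: must reload, TLB goes */
		switch_mm(prev->active_mm, next->mm);
		next->active_mm = next->mm;
	}
}

int main(void)
{
	struct mm_struct user_mm = { 1 };
	struct task_struct user = { &user_mm, &user_mm };
	struct task_struct kflushd = { NULL, NULL };

	context_switch_mm(&user, &kflushd);	/* 2.4: no flush          */
	context_switch_mm(&kflushd, &user);	/* same mm still loaded:  */
						/* no flush needed either */
	return 0;
}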

Andrea



