 
Subject: Re: elevator algorithm bug in ll_rw_blk.c
Hi,

On 17 Nov 1998 17:28:03 +0800, "Michael O'Reilly"
<michael@metal.iinet.net.au> said:

> In this case, the write performance frequently goes abysmal, even
> though it's sequential writing. 'Abysmal' to the tune of 800K/sec on
> a 6-disk array.

> (The actual performance case I saw this frequently on was a squid-1.1
> server shutting down: it would write 6 x 200MB files, one to each of
> the 6 disks.)

> I'm still not sure exactly what was happening, but the write
> performance would start off at 4 or 5 MB/sec for the first 10 or so
> seconds, then drop to 800K/sec for the remainder of the data.

I suspect that in this sort of situation we are being hit by two
separate known problems:

1) We only have a single request queue shared by all block devices.

2) Parallel sync()s interfere badly with each other.

If you have lots of large writers, then as those writers compete for
buffer cache space, they will all start syncing each other's buffers to
disk. I've been toying with the idea of just stomping on this problem
totally by imposing a strict limit on the amount of dirty data we allow
for any given disk (or maybe per process). If we stay below the
buffer-cache thrashing threshold, then we can just let bdflush do its
normal job of writeback, and we have a single sync thread which is not
interfered with by the rest of the system.

The cost of course is reduced concurrency.
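
Roughly the sort of thing I have in mind, purely as a sketch (untested,
written from memory; MAX_DIRTY_PER_DEV and the dirty_count[] table are
invented for this sketch, and the exact wakeup_bdflush()/MAX_BLKDEV
declarations may not match what the current tree has):

#include <linux/fs.h>		/* wakeup_bdflush(), MAX_BLKDEV (assumed) */
#include <linux/kdev_t.h>	/* kdev_t, MAJOR() */

#define MAX_DIRTY_PER_DEV 2048		/* invented tunable: dirty-buffer cap per device */

static int dirty_count[MAX_BLKDEV];	/* dirty buffers outstanding, per major */

/* Called just before dirtying a buffer that lives on 'dev'. */
static void balance_dirty(kdev_t dev)
{
	int major = MAJOR(dev);

	/* Over the cap: push the writeback onto bdflush and wait for
	 * it to make progress, rather than having this writer start
	 * syncing everyone else's buffers.  ('1' = wake bdflush and
	 * wait for it.)
	 */
	while (dirty_count[major] >= MAX_DIRTY_PER_DEV)
		wakeup_bdflush(1);

	dirty_count[major]++;
}

/* Called from I/O completion once the buffer has hit the disk. */
static void dirty_buffer_done(kdev_t dev)
{
	dirty_count[MAJOR(dev)]--;
}

The accounting would still need locking against the I/O completion
path, and a per-process limit would hang off the task struct instead of
a per-major table, but that is the shape of it.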

Any thoughts?

--Stephen

