Date:    Thu, 7 Sep 2000 15:58:15 -0300 (BRST)
From:    Rik van Riel <>
Subject: Re: Weirdness in block device queues.
On Thu, 7 Sep 2000, Eric Youngdale wrote:
> The oddness is this. We were observing stalls in the
> processing of commands that was traced to the fact that the
> queue had remained plugged for an excessive amount of time.
> The stalls last for about 5 seconds or so.
Which is the default sleep interval for kupdate, which calls run_task_queue(&tq_disk) inside flush_old_buffers().
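For reference, the idea looks roughly like this (a from-memory sketch, not
verbatim fs/buffer.c; run_task_queue(), tq_disk and ll_rw_block() are the
real interfaces, the rest stands in for the actual loop):

	/*
	 * Sketch of the kupdate side: every interval (5 seconds by
	 * default) it writes back old dirty buffers and then unplugs
	 * the disk queues, which is why a queue left plugged by
	 * somebody else can sit there for up to ~5 seconds.
	 */
	for (;;) {
		set_current_state(TASK_INTERRUPTIBLE);
		schedule_timeout(5 * HZ);	/* default kupdate interval   */
		flush_old_buffers();		/* ll_rw_block() old buffers  */
		run_task_queue(&tq_disk);	/* unplug: start the queued I/O */
	}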
> Some investigation revealed that part of the answer is that
> the bdflush daemon essentially forces a bunch of dirty pages to
> be written to disk, but never bothers to unplug the queue when
> it is done.
Heh, you noticed this too? ;)
> My gut tells me that it is wrong for bdflush to not unplug
> the queue when it is done queueing I/O requests.
It is. Not unplugging the queue results in higher throughput when running a benchmark load, but seems to really harm system throughput (and cause stalls) in /real/ loads.
This is most likely due to the fact that in most real life loads we have to write data and metadata all over the place and write clustering isn't as effective as in benchmark loads (where we write 500MB of "data" at full speed to one file).
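The fix would amount to something like this at the end of bdflush's
flushing pass (untested sketch, not a verbatim patch; flush_dirty_buffers()
is a stand-in name for whatever actually queues the writes):

	/*
	 * After bdflush has queued its batch of writes, kick the
	 * plugged device queues so the I/O starts now instead of
	 * waiting up to 5 seconds for kupdate to unplug them.
	 */
	flush_dirty_buffers();		/* queues the writes via ll_rw_block() */
	run_task_queue(&tq_disk);	/* unplug the queues before sleeping    */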
What about a "benchmark" value in /proc/sys/vm/bdflush to remind the rest of the world of why we want to unplug the device queue in bdflush() ? ;)
regards,
Rik
--
"What you're running that piece of shit Gnome?!?!"
	-- Miguel de Icaza, UKUUG 2000

http://www.conectiva.com/		http://www.surriel.com/