From: Jeff Moyer
Subject: [patch v4 0/3] aio: implement request batching
Date: 2 Oct 2009
Hi,

In this, the 4th iteration of the patch set, I've addressed the concerns
of Jens and Zach as follows:

- In the O_DIRECT write path, we get rid of WRITE_ODIRECT in favor of
  WRITE_SYNC_PLUG
- Request batching is only done for AIO operations that could
  potentially benefit from it
- Initialization of the hash table is done using the gcc array
  initialization syntax (a rough illustration follows this list)
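
For anyone unfamiliar with that gcc construct, the hash-table
initialization looks roughly like the sketch below; the names
batch_hash and AIO_BATCH_HASH_SIZE are placeholders here, not
necessarily what the patch uses, and the struct is a stand-in for the
kernel's <linux/list.h> definition.

#define AIO_BATCH_HASH_SIZE 8

struct hlist_head { struct hlist_node *first; };  /* stand-in for <linux/list.h> */

/* gcc range designator: initialize every bucket to an empty list head */
static struct hlist_head batch_hash[AIO_BATCH_HASH_SIZE] = {
	[0 ... AIO_BATCH_HASH_SIZE - 1] = { .first = NULL },
};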

Further, I got rid of a compiler warning that I had introduced in the
last patch set. I'll continue to test this under a variety of loads,
but would certainly appreciate it if others could give it a spin.

Patch overview / reason for existence:

Some workloads issue batches of small I/Os, and the performance is poor
due to the call to blk_run_address_space for every single iocb. Nathan
Roberts pointed this out, and suggested that by deferring this call
until all I/Os in the iocb array are submitted to the block layer, we
can realize some impressive performance gains.
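
To make the shape of the change concrete, here is a stand-alone,
user-space model of the batching idea only (not the kernel code):
queues touched by the batch are remembered in a small table, and each
one is kicked exactly once after the whole iocb array has been
submitted, rather than once per iocb as blk_run_address_space is
called today.  All of the names below are made up for illustration.

#include <stdio.h>
#include <stddef.h>

#define BATCH_SIZE 8	/* small table; a batch rarely spans many devices */

struct request_queue { int id; };

/* Queues touched while submitting the current iocb array. */
static struct request_queue *queues_touched[BATCH_SIZE];

/* Stand-in for "kick the queue" (blk_run_address_space in the kernel). */
static void run_queue(struct request_queue *q)
{
	printf("kicking queue %d\n", q->id);
}

/* Remember that this batch touched 'q'; duplicates are collapsed. */
static void batch_add(struct request_queue *q)
{
	size_t h = (size_t)q->id % BATCH_SIZE;

	while (queues_touched[h] && queues_touched[h] != q)
		h = (h + 1) % BATCH_SIZE;	/* linear probe */
	queues_touched[h] = q;
}

/* After every iocb in the array is submitted, kick each queue once. */
static void batch_complete(void)
{
	size_t i;

	for (i = 0; i < BATCH_SIZE; i++) {
		if (queues_touched[i]) {
			run_queue(queues_touched[i]);
			queues_touched[i] = NULL;
		}
	}
}

int main(void)
{
	struct request_queue sda = { 0 }, sdb = { 1 };
	struct request_queue *targets[] = { &sda, &sda, &sdb, &sda };
	size_t i;

	for (i = 0; i < sizeof(targets) / sizeof(targets[0]); i++)
		batch_add(targets[i]);	/* submit the whole batch first... */

	batch_complete();		/* ...then kick each queue exactly once */
	return 0;
}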

For example, running against a simple SATA disk driving a queue depth of
128, we can improve the performance of O_DIRECT writes twofold for 4k
I/Os, and see similarly impressive numbers for other I/O sizes:
http://people.redhat.com/jmoyer/dbras/vanilla-vs-v4/metallica-single-sata/noop/io-depth-128-write-bw.png

For read workloads on somewhat faster storage, we see similar benefits
for batches of smaller I/Os as well:
http://people.redhat.com/jmoyer/dbras/vanilla-vs-v4/thor-4-disk-lvm-stripe/noop/io-depth-16-read-bw.png
That's 230+ MB/s for vanilla 4k reads in batches of 16, versus 410 MB/s
for the patched kernel. I haven't witnessed numbers that dramatic when
multiple threads are competing for the disk, however.

Cheers,
Jeff

