Subject: [RFC][PATCH] dm: improve read performance
Date: Mon, 27 Dec 2010 12:19:55 +0100
From: Mustafa Mesanovic <mume@linux.vnet.ibm.com>
A short explanation in prior: in this case we have "stacked" dm devices. Two multipathed luns combined together to one striped logical volume.
I/O throughput degradation happens in __bio_add_page when bios are checked against max_sectors. In this setup max_sectors is always set to 8, i.e. 4KiB. A standalone striped logical volume on LUNs which are not multipathed does not have this problem: the logical volume inherits max_sectors from the LUNs below.
The same holds for LUNs which are multipathed: the multipath targets get the same max_sectors as the LUNs below.
So "magic" happens only when target has no own merge_fn and below lying devices have a merge function -> we got then max_sectors=PAGE_SIZE >> 9. This patch prevents that max_sectors will be set to PAGE_SIZE >> 9. Instead it will use the minimum max_sectors value from below devices.
With the patch, read I/O improves by up to 3x; in this specific case from 600MiB/s to 1800MiB/s.
Signed-off-by: Mustafa Mesanovic <mume@linux.vnet.ibm.com>
---
 dm-table.c |    2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
Index: linux-2.6/drivers/md/dm-table.c
===================================================================
--- linux-2.6.orig/drivers/md/dm-table.c	2010-12-23 13:49:18.000000000 +0100
+++ linux-2.6/drivers/md/dm-table.c	2010-12-23 13:50:22.000000000 +0100
@@ -518,7 +518,7 @@
 	if (q->merge_bvec_fn && !ti->type->merge)
 		blk_limits_max_hw_sectors(limits,
-					  (unsigned int) (PAGE_SIZE >> 9));
+					  q->limits.max_sectors);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(dm_set_device_limits);