Subject: Re: device mapper not reporting no-barrier-support?
On Mon, 25 Feb 2008 14:26:15 +0100 Anders Henke <anders.henke@1und1.de> wrote:

> Hi,
>
> I'm currently stuck between Kernel LVM and DRBD, as I'm using Kernel
> 2.6.24.2 with DRBD 8.2.5 on top of an LVM2 device (LV).
>
> -LVM2/device mapper doesn't support write barriers
> -DRBD uses blkdev_issue_flush() to flush its metadata to disk.
> On a no-barrier-device, DRBD should receive EOPNOTSUPP, but
> it really does receive an EIO. Promptly, DRBD gives the
> error message "drbd0: local disk flush failed with status -5".
>
> The physical disk (in LVM speak) is a RAID1 on a 3ware 9650SE-2LP
> controller; the driver 3w-9xxx supports barriers, and after moving my
> DRBD device from the LV to a single partition on the same RAID1, the
> error messages from DRBD vanished.
>
> I've posted a lengthy summary of my findings to
>
> http://lists.linbit.com/pipermail/drbd-user/2008-February/008665.html
>
> ... where Lars Ellenberg from DRBD basically responded in
>
> http://lists.linbit.com/pipermail/drbd-user/2008-February/008666.html
>
> ... that DRBD does catch the EOPNOTSUPP for blkdev_issue_flush and
> BIO_RW_BARRIER, but the LVM implementation of blkdev_issue_flush in
> 2.6.24.2 apparently returns EIO instead.
>
> So, simply put, the question: how should a top-layer driver check whether
> a lower device supports barriers? md-raid checks this differently than
> e.g. XFS does, and DRBD adds yet a third way.
> Or is this "merely" a bug in drivers/md/dm.c?
>

(cc dm-devel)

I'd say it's a DM bug. Probably a hard-to-fix one though.
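
For reference, the caller-side pattern the poster expects looks roughly
like the following. This is a minimal sketch of the EOPNOTSUPP check, not
DRBD's actual code; it assumes the 2.6.24-era two-argument
blkdev_issue_flush() signature, and flush_lower_device() is just an
illustrative name.

	/*
	 * Minimal sketch -- not DRBD's actual code.  Assumes the
	 * 2.6.24-era blkdev_issue_flush(bdev, error_sector) signature.
	 */
	#include <linux/blkdev.h>
	#include <linux/errno.h>

	static int flush_lower_device(struct block_device *bdev)
	{
		int err = blkdev_issue_flush(bdev, NULL);

		if (err == -EOPNOTSUPP) {
			/*
			 * Lower device reports no barrier/flush support:
			 * degrade gracefully (skip flushing) rather than
			 * treating this as a failure.
			 */
			return 0;
		}

		/*
		 * Anything else is a real I/O error.  The report above is
		 * that a DM/LVM2 lower device yields -EIO here instead of
		 * -EOPNOTSUPP, which the caller cannot distinguish from a
		 * genuinely failed flush.
		 */
		return err;
	}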

