Subject: Re: [RFC PATCH v1 0/7] Block/XFS: Support alternative mirror device retry
On Nov 27, 2018, at 10:49 PM, Darrick J. Wong <darrick.wong@oracle.com> wrote:
> On Wed, Nov 28, 2018 at 04:33:03PM +1100, Dave Chinner wrote:
>> On Tue, Nov 27, 2018 at 08:49:44PM -0700, Allison Henderson wrote:
>>> Motivation:
>>> When an fs data/metadata checksum mismatches, the lower block devices may
>>> have other correct copies. E.g. if XFS successfully reads a metadata buffer
>>> off a raid1 but decides that the metadata is garbage, today it will shut
>>> down the entire filesystem without trying any of the other mirrors. This is
>>> a severe loss of service, and we propose these patches to have XFS try
>>> harder to avoid failure.
>>>
>>> This patch set prototypes the mirror retry idea by:
>>> * Adding @nr_mirrors to struct request_queue, similar to blk_queue_nonrot(),
>>> so a filesystem can grab the device request queue and check the maximum
>>> number of mirrors this block device has.
>>> Helper functions were also added to get/set nr_mirrors.
>>>
>>> * Expanding bi_write_hint to bi_rw_hint; @bi_rw_hint now has three meanings:
>>> 1. The original write_hint.
>>> 2. end_io() updates @bi_rw_hint to reflect which mirror this I/O actually came from.
>>> 3. The fs sets @bi_rw_hint to force the driver (e.g. raid1) to read from a specific mirror.
>>>
>>> * Modifying md/raid1 to support this retry feature.
>>>
>>> * Adding b_rw_hint to xfs_buf.
>>> This patch adds a new field b_rw_hint to xfs_buf. We use this to set the
>>> new bio->bi_rw_hint when submitting the read request, and also to store the
>>> returned mirror when the read completes.
>
>> the retry iterations. That allows us to let the block layer pick
>> whatever leg it wants for the initial read, but if we get a failure
>> we directly control the mirror we retry from and all bios in the
>> buffer go to that same mirror.
>> - is it generic/abstract enough to be able to work with
>> RAID5/6 to trigger verification/recovery from the parity
>> information in the stripe?
>
> In theory we could supply a raid5 implementation, wherein rw_hint == 0
> lets the raid do as it pleases; rw_hint == 1 reads from the stripe; and
> rw_hint == 2 forces stripe recovery for the given block.

Definitely this API needs to be useful for RAID-5/6 storage as well, and
I don't think that needs too complex an interface to achieve.

Basically, the "nr_mirrors" parameter would instead be "nr_retries" or
similar, so that the caller knows how many possible data combinations
there are to try and validate. For mirrors this is easy, and as it is
currently implemented. For RAID-5/6 this would essentially be the
number of data rebuild combinations in the RAID group (e.g. 8 in a
RAID-5 8+1 setup, and 16 in a RAID-6 8+2).
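
To make the calling convention concrete, a caller-side retry loop might look
something like the sketch below. This is purely illustrative:
blk_queue_get_nr_mirrors(), submit_buf_read(), verify_buf() and struct my_buf
are stand-in names for whatever helpers the patch set ends up providing, not
existing interfaces.

    /*
     * Illustrative read path: hint 0 lets the driver pick a leg, while
     * hints 1..nr_retries select a specific mirror (RAID-1) or a specific
     * rebuild combination (RAID-5/6).
     */
    static int read_with_retries(struct block_device *bdev, struct my_buf *bp)
    {
            unsigned int nr_retries =
                    blk_queue_get_nr_mirrors(bdev_get_queue(bdev));
            unsigned int hint;
            int error;

            for (hint = 0; hint <= nr_retries; hint++) {
                    bp->b_rw_hint = hint;   /* copied into bio->bi_rw_hint */
                    error = submit_buf_read(bp);
                    if (!error && verify_buf(bp))
                            return 0;       /* found a good copy */
            }
            return -EFSCORRUPTED;
    }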

For each call with nr_retries != 0, the MD RAID-5/6 driver would skip
one of the data drives and rebuild that part of the data from parity.
This wouldn't take too long, since the blocks are already in memory;
they just need the parity to be recomputed in a few different ways to
find a combination that returns valid data (e.g. if a drive failed and
the parity also has a latent corrupt sector, which is not uncommon).
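
For the RAID-5 case that per-retry rebuild is just XOR arithmetic over blocks
that are already in memory. A minimal sketch of the idea (not MD's actual
code; RAID-6 Q-parity rebuilds need the usual Galois-field math and are
omitted here):

    #include <stddef.h>
    #include <string.h>

    /*
     * Reconstruct the candidate contents of data drive 'skip' in one
     * stripe from the parity block and the remaining data blocks:
     *   D[skip] = P ^ D[0] ^ ... ^ D[skip-1] ^ D[skip+1] ^ ... ^ D[n-1]
     */
    static void raid5_rebuild_candidate(unsigned char **data, int ndata,
                                        const unsigned char *parity,
                                        int skip, unsigned char *out,
                                        size_t len)
    {
            size_t i;
            int d;

            memcpy(out, parity, len);
            for (d = 0; d < ndata; d++) {
                    if (d == skip)
                            continue;
                    for (i = 0; i < len; i++)
                            out[i] ^= data[d][i];
            }
    }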

The next step is to have an API that says "retry=N returned the correct
data, rebuild the parity/drive with that combination of devices" so
that the corrupt parity sector isn't used during the rebuild.
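
One hypothetical shape for that follow-up call, just to show the kind of
information it would need to carry (none of this exists today, and the names
are made up):

    /* The fs reports which retry produced data that passed verification. */
    struct blk_repair_hint {
            sector_t        sector;         /* start of the verified range */
            unsigned int    nr_sects;       /* length of the verified range */
            unsigned short  good_hint;      /* rw_hint that returned valid data */
    };

    /*
     * MD would use this to rewrite the stale copy (RAID-1), or recompute
     * and rewrite the bad parity/data sector (RAID-5/6) from the verified
     * combination, instead of trusting the corrupt sector during rebuild.
     */
    int blk_queue_report_good_copy(struct request_queue *q,
                                   const struct blk_repair_hint *hint);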

Cheers, Andreas
