From: Jiri Slaby <>
Subject: [PATCH 3.12 16/82] md/raid1: fix test for 'was read error from last working device'.
Date: Mon, 24 Aug 2015 11:08:36 +0200
From: NeilBrown <neilb@suse.com>
3.12-stable review patch. If anyone has any objections, please let me know.
===============
commit 34cab6f42003cb06f48f86a86652984dec338ae9 upstream.
When we get a read error from the last working device, we don't try to repair it, and don't fail the device. We simply report a read error to the caller.
However the current test for 'is this the last working device' is wrong. When there is only one fully working device, it assumes that any non-faulty device is that device. However a spare which is rebuilding is also non-faulty, yet it is not the only working device.
So change the test from "!Faulty" to "In_sync". If ->degraded says there is only one fully working device and this device is in_sync, this must be the one.
This bug has existed since we allowed read_balance to read from a recovering spare in v3.0.
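To see why the distinction matters, here is a minimal userspace sketch (not kernel code; the struct and helper names below are hypothetical stand-ins for the test_bit(Faulty/In_sync, &rdev->flags) checks in raid1.c):

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical stand-in for the relevant md rdev flags. */
struct rdev_model {
	bool faulty;
	bool in_sync;
};

/* Old test: "last working device" if only one device is fully
 * working (degraded == raid_disks-1) and this one is not Faulty.
 */
static bool last_working_old(int degraded, int raid_disks,
			     const struct rdev_model *rdev)
{
	return degraded == raid_disks ||
	       (degraded == raid_disks - 1 && !rdev->faulty);
}

/* New test: additionally require the device to be In_sync. */
static bool last_working_new(int degraded, int raid_disks,
			     const struct rdev_model *rdev)
{
	return degraded == raid_disks ||
	       (degraded == raid_disks - 1 && rdev->in_sync);
}

int main(void)
{
	/* Two-disk array: one in_sync mirror, one rebuilding spare.
	 * The spare is non-faulty but not in_sync, so degraded == 1.
	 */
	struct rdev_model spare = { .faulty = false, .in_sync = false };

	/* Read error on the spare: the old test wrongly calls it the
	 * last working device (error returned to the caller); the new
	 * test lets md retry the read from the in_sync mirror.
	 */
	printf("old test: %d\n", last_working_old(1, 2, &spare)); /* prints 1 */
	printf("new test: %d\n", last_working_new(1, 2, &spare)); /* prints 0 */
	return 0;
}

For a device that is in_sync the two tests agree; they diverge only for a rebuilding spare, which is exactly the case the patch fixes.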
Reported-and-tested-by: Alexander Lyakas <alex.bolshoy@gmail.com>
Fixes: 76073054c95b ("md/raid1: clean up read_balance.")
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
---
 drivers/md/raid1.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 633b6e1e7d4d..14934ac112c3 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -327,7 +327,7 @@ static void raid1_end_read_request(struct bio *bio, int error)
 		spin_lock_irqsave(&conf->device_lock, flags);
 		if (r1_bio->mddev->degraded == conf->raid_disks ||
 		    (r1_bio->mddev->degraded == conf->raid_disks-1 &&
-		     !test_bit(Faulty, &conf->mirrors[mirror].rdev->flags)))
+		     test_bit(In_sync, &conf->mirrors[mirror].rdev->flags)))
 			uptodate = 1;
 		spin_unlock_irqrestore(&conf->device_lock, flags);
 	}
-- 
2.5.0