    Subject: [037/127] md/raid1: really fix recovery looping when single good device fails.
    2.6.32-stable review patch.  If anyone has any objections, please let us know.

    ------------------

    From: NeilBrown <neilb@suse.de>

    commit 8f9e0ee38f75d4740daa9e42c8af628d33d19a02 upstream.

    Commit 4044ba58dd15cb01797c4fd034f39ef4a75f7cc3 supposedly fixed a
    problem where, if a raid1 with just one good device gets a read error
    during recovery, the recovery would abort and immediately restart in
    an infinite loop.
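
    For context, the mechanism introduced by commit 4044ba58dd15 works by
    flagging the array rather than failing its last good device: when the
    only remaining in-sync device reports an error, raid1's error handler
    refuses to mark it Faulty and instead disables further recovery
    attempts. A paraphrased sketch of that logic (not the verbatim
    2.6.32-stable source; here conf is raid1's per-array conf_t and rdev
    the failing device) looks like this:

    	/* raid1 error handler (sketch): the failing device is the last
    	 * one still in sync, so keep it, but give up on recovery. */
    	if (test_bit(In_sync, &rdev->flags) &&
    	    (conf->raid_disks - mddev->degraded) == 1) {
    		mddev->recovery_disabled = 1;	/* never retry recovery */
    		set_bit(MD_RECOVERY_INTR, &mddev->recovery);	/* abort the current run */
    		return;
    	}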

    However, it depended on raid1_remove_disk removing the spare device
    from the array, which does not happen in this case. So add a test
    so that in the 'recovery_disabled' case the device will be removed.
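
    With the one-line test in place, the condition in raid1_remove_disk()
    reads as sketched below (context reconstructed from the hunk further
    down): once recovery_disabled is set, the function no longer refuses
    the removal with -EBUSY, the spare can be dropped, and the
    abort/restart loop is broken.

    	/* raid1_remove_disk() (sketch of the patched test): hold on to
    	 * the device only while it is healthy, recovery is still
    	 * possible, and the array is not fully degraded. */
    	if (!test_bit(Faulty, &rdev->flags) &&
    	    !mddev->recovery_disabled &&	/* added by this patch */
    	    mddev->degraded < conf->raid_disks) {
    		err = -EBUSY;
    		goto abort;
    	}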

    This is suitable for any kernel since 2.6.29, which is when
    recovery_disabled was introduced.

    Reported-by: Sebastian Färber <faerber@gmail.com>
    Signed-off-by: NeilBrown <neilb@suse.de>
    Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>

    ---
    drivers/md/raid1.c | 1 +
    1 file changed, 1 insertion(+)

    --- a/drivers/md/raid1.c
    +++ b/drivers/md/raid1.c
    @@ -1188,6 +1188,7 @@ static int raid1_remove_disk(mddev_t *md
     		 * is not possible.
     		 */
     		if (!test_bit(Faulty, &rdev->flags) &&
    +		    !mddev->recovery_disabled &&
     		    mddev->degraded < conf->raid_disks) {
     			err = -EBUSY;
     			goto abort;

