 
    Subject: [PATCH 3.18 75/86] Don't jump to compute_result state from check_result state
    Date: 2019-05-15
    From: Nigel Croxon <ncroxon@redhat.com>

    commit 4f4fd7c5798bbdd5a03a60f6269cf1177fbd11ef upstream.

    Changing state from check_state_check_result to
    check_state_compute_result is not only unsafe but also doesn't
    appear to serve a valid purpose. A raid6 check should only be
    pushing out extra writes if it is doing repair and a mismatch
    occurs. The stripe dev management will already try to do repair
    writes for failing sectors.

    This patch makes the raid6 check_state_check_result handling
    work more like raid5's. If there are somehow too many failures
    for a check, just quit the check operation for the stripe. When
    any checks pass, don't try to use check_state_compute_result for
    a purpose it isn't needed for and is unsafe for. Just mark the
    stripe as in sync for passing its parity checks, and let the
    stripe dev read/write code and the bad blocks list do their job
    of handling I/O errors.
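
    For orientation, the states involved come from enum check_states in
    drivers/md/raid5.h. The sketch below paraphrases that enum and marks
    the transition this patch removes; it is illustrative rather than the
    literal kernel source:

    enum check_states {
    	check_state_idle = 0,
    	check_state_run,		/* xor parity check in flight */
    	check_state_run_q,		/* q-parity check in flight */
    	check_state_run_pq,		/* p and q dual parity check in flight */
    	check_state_check_result,	/* check done; inspect zero_sum_result */
    	check_state_compute_run,	/* parity repair in flight */
    	check_state_compute_result,	/* repair done; write back P and/or Q */
    };

    /* Transition removed by this patch (unsafe once reads have failed):
     *   check_state_check_result -> check_state_compute_result
     * Behavior after this patch when the checked parity is correct:
     *   mark STRIPE_INSYNC and return to check_state_idle
     */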

    Repro steps from Xiao:

    These are the steps to reproduce this problem:
    1. redefine OPT_MEDIUM_ERR_ADDR to 12000 in scsi_debug.c
    2. insmod scsi_debug.ko dev_size_mb=11000 max_luns=1 num_tgts=1
    3. mdadm --create /dev/md127 --level=6 --raid-devices=5 /dev/sde1 /dev/sde2 /dev/sde3 /dev/sde5 /dev/sde6
    sde is the disk created by scsi_debug
    4. echo "2" >/sys/module/scsi_debug/parameters/opts
    5. raid-check

    It panics:
    [ 4854.730899] md: data-check of RAID array md127
    [ 4854.857455] sd 5:0:0:0: [sdr] tag#80 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 4854.859246] sd 5:0:0:0: [sdr] tag#80 Sense Key : Medium Error [current]
    [ 4854.860694] sd 5:0:0:0: [sdr] tag#80 Add. Sense: Unrecovered read error
    [ 4854.862207] sd 5:0:0:0: [sdr] tag#80 CDB: Read(10) 28 00 00 00 2d 88 00 04 00 00
    [ 4854.864196] print_req_error: critical medium error, dev sdr, sector 11656 flags 0
    [ 4854.867409] sd 5:0:0:0: [sdr] tag#100 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 4854.869469] sd 5:0:0:0: [sdr] tag#100 Sense Key : Medium Error [current]
    [ 4854.871206] sd 5:0:0:0: [sdr] tag#100 Add. Sense: Unrecovered read error
    [ 4854.872858] sd 5:0:0:0: [sdr] tag#100 CDB: Read(10) 28 00 00 00 2e e0 00 00 08 00
    [ 4854.874587] print_req_error: critical medium error, dev sdr, sector 12000 flags 4000
    [ 4854.876456] sd 5:0:0:0: [sdr] tag#101 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 4854.878552] sd 5:0:0:0: [sdr] tag#101 Sense Key : Medium Error [current]
    [ 4854.880278] sd 5:0:0:0: [sdr] tag#101 Add. Sense: Unrecovered read error
    [ 4854.881846] sd 5:0:0:0: [sdr] tag#101 CDB: Read(10) 28 00 00 00 2e e8 00 00 08 00
    [ 4854.883691] print_req_error: critical medium error, dev sdr, sector 12008 flags 4000
    [ 4854.893927] sd 5:0:0:0: [sdr] tag#166 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    [ 4854.896002] sd 5:0:0:0: [sdr] tag#166 Sense Key : Medium Error [current]
    [ 4854.897561] sd 5:0:0:0: [sdr] tag#166 Add. Sense: Unrecovered read error
    [ 4854.899110] sd 5:0:0:0: [sdr] tag#166 CDB: Read(10) 28 00 00 00 2e e0 00 00 10 00
    [ 4854.900989] print_req_error: critical medium error, dev sdr, sector 12000 flags 0
    [ 4854.902757] md/raid:md127: read error NOT corrected!! (sector 9952 on sdr1).
    [ 4854.904375] md/raid:md127: read error NOT corrected!! (sector 9960 on sdr1).
    [ 4854.906201] ------------[ cut here ]------------
    [ 4854.907341] kernel BUG at drivers/md/raid5.c:4190!

    raid5.c:4190 above is this BUG_ON:

    handle_parity_checks6()
    ...
    BUG_ON(s->uptodate < disks - 1); /* We don't need Q to recover */
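
    Why it fires: the two unrecovered reads leave two stripe devices not
    uptodate, while the check_state_compute_result path assumes at most one
    block ever needs recomputing. A minimal sketch of the arithmetic, using
    the repro's values (variable names mirror the kernel's, but this is not
    the literal source):

    	int disks = 5;			/* --raid-devices=5 for md127 */
    	int failed = 2;			/* two uncorrected read errors */
    	int uptodate = disks - failed;	/* == 3 */

    	BUG_ON(uptodate < disks - 1);	/* 3 < 4, so the assertion trips */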

    Cc: <stable@vger.kernel.org> # v3.16+
    OriginalAuthor: David Jeffery <djeffery@redhat.com>
    Cc: Xiao Ni <xni@redhat.com>
    Tested-by: David Jeffery <djeffery@redhat.com>
    Signed-off-by: David Jeffery <djeffery@redhat.com>
    Signed-off-by: Nigel Croxon <ncroxon@redhat.com>
    Signed-off-by: Song Liu <songliubraving@fb.com>
    Signed-off-by: Jens Axboe <axboe@kernel.dk>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    drivers/md/raid5.c | 19 ++++---------------
    1 file changed, 4 insertions(+), 15 deletions(-)

    --- a/drivers/md/raid5.c
    +++ b/drivers/md/raid5.c
    @@ -3414,26 +3414,15 @@ static void handle_parity_checks6(struct
     	case check_state_check_result:
     		sh->check_state = check_state_idle;
     
    +		if (s->failed > 1)
    +			break;
     		/* handle a successful check operation, if parity is correct
     		 * we are done. Otherwise update the mismatch count and repair
     		 * parity if !MD_RECOVERY_CHECK
     		 */
     		if (sh->ops.zero_sum_result == 0) {
    -			/* both parities are correct */
    -			if (!s->failed)
    -				set_bit(STRIPE_INSYNC, &sh->state);
    -			else {
    -				/* in contrast to the raid5 case we can validate
    -				 * parity, but still have a failure to write
    -				 * back
    -				 */
    -				sh->check_state = check_state_compute_result;
    -				/* Returning at this point means that we may go
    -				 * off and bring p and/or q uptodate again so
    -				 * we make sure to check zero_sum_result again
    -				 * to verify if p or q need writeback
    -				 */
    -			}
    +			/* Any parity checked was correct */
    +			set_bit(STRIPE_INSYNC, &sh->state);
     		} else {
     			atomic64_add(STRIPE_SECTORS, &conf->mddev->resync_mismatches);
     			if (test_bit(MD_RECOVERY_CHECK, &conf->mddev->recovery))