Subject: Re: [PATCH] nvme-multipath: Reset bi_disk to ns head when failover
From: Hannes Reinecke <hare@suse.de>
Date: 2021-05-03
On 5/3/21 2:57 PM, Daniel Wagner wrote:
> The path can be stale when we fail over. If we don't reset the bdev
> to the ns head and the I/O eventually completes in end_io(), it will
> trigger a crash. Reset the bdev to the ns head disk so that the
> submit path can map the request to an active path.
>
> Signed-off-by: Daniel Wagner <dwagner@suse.de>
> ---
>
> The patch is against nvme-5.13.
>
> [ 6552.155244] Call Trace:
> [ 6552.155251] bio_endio+0x74/0x120
> [ 6552.155260] nvme_ns_head_submit_bio+0x36f/0x3e0 [nvme_core]
> [ 6552.155266] ? __switch_to_asm+0x34/0x70
> [ 6552.155269] ? __switch_to_asm+0x40/0x70
> [ 6552.155271] submit_bio_noacct+0x175/0x490
> [ 6552.155274] ? __switch_to_asm+0x34/0x70
> [ 6552.155277] ? __switch_to_asm+0x34/0x70
> [ 6552.155284] ? nvme_requeue_work+0x5a/0x70 [nvme_core]
> [ 6552.155290] nvme_requeue_work+0x5a/0x70 [nvme_core]
> [ 6552.155296] process_one_work+0x1f4/0x3e0
> [ 6552.155299] worker_thread+0x2d/0x3e0
> [ 6552.155302] ? process_one_work+0x3e0/0x3e0
> [ 6552.155305] kthread+0x10d/0x130
> [ 6552.155307] ? kthread_park+0xa0/0xa0
> [ 6552.155311] ret_from_fork+0x35/0x40
>
> drivers/nvme/host/multipath.c | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
> index 0d0de3433f37..0faf267faa58 100644
> --- a/drivers/nvme/host/multipath.c
> +++ b/drivers/nvme/host/multipath.c
> @@ -69,7 +69,9 @@ void nvme_failover_req(struct request *req)
>  {
>  	struct nvme_ns *ns = req->q->queuedata;
>  	u16 status = nvme_req(req)->status & 0x7ff;
> +	struct block_device *bdev;
>  	unsigned long flags;
> +	struct bio *bio;
>  
>  	nvme_mpath_clear_current_path(ns);
>  
> @@ -83,9 +85,13 @@ void nvme_failover_req(struct request *req)
>  		queue_work(nvme_wq, &ns->ctrl->ana_work);
>  	}
>  
> +	bdev = bdget_disk(ns->head->disk, 0);
>  	spin_lock_irqsave(&ns->head->requeue_lock, flags);
> +	for (bio = req->bio; bio; bio = bio->bi_next)
> +		bio_set_dev(bio, bdev);
>  	blk_steal_bios(&ns->head->requeue_list, req);
>  	spin_unlock_irqrestore(&ns->head->requeue_lock, flags);
> +	bdput(bdev);
>  
>  	blk_mq_end_request(req, 0);
>  	kblockd_schedule_work(&ns->head->requeue_work);
>
Maybe add a WARN_ON(!bdev) after bdget_disk(), so a failed lookup
doesn't go unnoticed. A minimal sketch of what I mean (untested, on
top of this patch):
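	bdev = bdget_disk(ns->head->disk, 0);
	/* bdget_disk() can return NULL; warn here rather than
	 * silently handing a NULL bdev to bio_set_dev() below */
	WARN_ON(!bdev);
	spin_lock_irqsave(&ns->head->requeue_lock, flags);

But otherwise: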

Reviewed-by: Hannes Reinecke <hare@suse.de>

Cheers,

Hannes
--
Dr. Hannes Reinecke                Kernel Storage Architect
hare@suse.de                       +49 911 74053 688
SUSE Software Solutions GmbH, Maxfeldstr. 5, 90409 Nürnberg
HRB 36809 (AG Nürnberg), Geschäftsführer: Felix Imendörffer
