    Subject: [PATCH 5.4 108/178] nvme-multipath: fix deadlock due to head->lock
    From: Anton Eidelman <anton@lightbitslabs.com>

    [ Upstream commit d8a22f85609fadb46ba699e0136cc3ebdeebff79 ]

    In the following scenario scan_work and ana_work will deadlock:

    When scan_work calls nvme_mpath_add_disk(), it holds ana_lock
    and invokes nvme_parse_ana_log(), which may issue IO
    in device_add_disk() and hang waiting for an accessible path.

    While nvme_mpath_set_live() is only called when nvme_state_is_live(),
    a transition may cause NVME_SC_ANA_TRANSITION and requeue the IO.

    Since nvme_mpath_set_live() holds ns->head->lock, an ana_work on
    ANY ctrl will not be able to complete nvme_mpath_set_live()
    on the same ns->head, which is required in order to update
    the new accessible path and clear NVME_NS_ANA_PENDING.
    Therefore the IO never completes: deadlock [1].

    Fix:
    Move device_add_disk() out of the head->lock and protect it with an
    atomic test_and_set for a new NVME_NSHEAD_DISK_LIVE bit.
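
    For illustration only (not part of the patch): a minimal userspace sketch
    of the same "do the one-time setup outside the lock" pattern, using C11
    atomics in place of the kernel's test_and_set_bit(). The names disk_live
    and add_disk_once() are invented for the example.

        #include <stdatomic.h>
        #include <stdio.h>

        static atomic_flag disk_live = ATOMIC_FLAG_INIT;

        /* Analogue of nvme_mpath_set_live(): the one-time registration
         * (device_add_disk() in the kernel) may sleep, so it is guarded
         * by an atomic test-and-set instead of being run under the mutex. */
        static void add_disk_once(void)
        {
                /* atomic_flag_test_and_set() returns the previous value,
                 * so only the first caller performs the registration. */
                if (!atomic_flag_test_and_set(&disk_live))
                        printf("registering disk (runs exactly once)\n");
        }

        int main(void)
        {
                add_disk_once();        /* does the one-time work */
                add_disk_once();        /* no-op on later calls   */
                return 0;
        }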

    [1]:
    kernel: INFO: task kworker/u8:2:160 blocked for more than 120 seconds.
    kernel: Tainted: G OE 5.3.5-050305-generic #201910071830
    kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: kworker/u8:2 D 0 160 2 0x80004000
    kernel: Workqueue: nvme-wq nvme_ana_work [nvme_core]
    kernel: Call Trace:
    kernel: __schedule+0x2b9/0x6c0
    kernel: schedule+0x42/0xb0
    kernel: schedule_preempt_disabled+0xe/0x10
    kernel: __mutex_lock.isra.0+0x182/0x4f0
    kernel: __mutex_lock_slowpath+0x13/0x20
    kernel: mutex_lock+0x2e/0x40
    kernel: nvme_update_ns_ana_state+0x22/0x60 [nvme_core]
    kernel: nvme_update_ana_state+0xca/0xe0 [nvme_core]
    kernel: nvme_parse_ana_log+0xa1/0x180 [nvme_core]
    kernel: nvme_read_ana_log+0x76/0x100 [nvme_core]
    kernel: nvme_ana_work+0x15/0x20 [nvme_core]
    kernel: process_one_work+0x1db/0x380
    kernel: worker_thread+0x4d/0x400
    kernel: kthread+0x104/0x140
    kernel: ret_from_fork+0x35/0x40
    kernel: INFO: task kworker/u8:4:439 blocked for more than 120 seconds.
    kernel: Tainted: G OE 5.3.5-050305-generic #201910071830
    kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
    kernel: kworker/u8:4 D 0 439 2 0x80004000
    kernel: Workqueue: nvme-wq nvme_scan_work [nvme_core]
    kernel: Call Trace:
    kernel: __schedule+0x2b9/0x6c0
    kernel: schedule+0x42/0xb0
    kernel: io_schedule+0x16/0x40
    kernel: do_read_cache_page+0x438/0x830
    kernel: read_cache_page+0x12/0x20
    kernel: read_dev_sector+0x27/0xc0
    kernel: read_lba+0xc1/0x220
    kernel: efi_partition+0x1e6/0x708
    kernel: check_partition+0x154/0x244
    kernel: rescan_partitions+0xae/0x280
    kernel: __blkdev_get+0x40f/0x560
    kernel: blkdev_get+0x3d/0x140
    kernel: __device_add_disk+0x388/0x480
    kernel: device_add_disk+0x13/0x20
    kernel: nvme_mpath_set_live+0x119/0x140 [nvme_core]
    kernel: nvme_update_ns_ana_state+0x5c/0x60 [nvme_core]
    kernel: nvme_mpath_add_disk+0xbe/0x100 [nvme_core]
    kernel: nvme_validate_ns+0x396/0x940 [nvme_core]
    kernel: nvme_scan_work+0x256/0x390 [nvme_core]
    kernel: process_one_work+0x1db/0x380
    kernel: worker_thread+0x4d/0x400
    kernel: kthread+0x104/0x140
    kernel: ret_from_fork+0x35/0x40

    Fixes: 0d0b660f214d ("nvme: add ANA support")
    Signed-off-by: Anton Eidelman <anton@lightbitslabs.com>
    Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Sasha Levin <sashal@kernel.org>
    ---
    drivers/nvme/host/multipath.c | 4 ++--
    drivers/nvme/host/nvme.h | 2 ++
    2 files changed, 4 insertions(+), 2 deletions(-)

    diff --git a/drivers/nvme/host/multipath.c b/drivers/nvme/host/multipath.c
    index 18f0a05c74b56..574b52e911f08 100644
    --- a/drivers/nvme/host/multipath.c
    +++ b/drivers/nvme/host/multipath.c
    @@ -417,11 +417,11 @@ static void nvme_mpath_set_live(struct nvme_ns *ns)
             if (!head->disk)
                     return;
    
    -        mutex_lock(&head->lock);
    -        if (!(head->disk->flags & GENHD_FL_UP))
    +        if (!test_and_set_bit(NVME_NSHEAD_DISK_LIVE, &head->flags))
                     device_add_disk(&head->subsys->dev, head->disk,
                             nvme_ns_id_attr_groups);
    
    +        mutex_lock(&head->lock);
             if (nvme_path_is_optimized(ns)) {
                     int node, srcu_idx;

    diff --git a/drivers/nvme/host/nvme.h b/drivers/nvme/host/nvme.h
    index 22e8401352c22..ed02260862cb5 100644
    --- a/drivers/nvme/host/nvme.h
    +++ b/drivers/nvme/host/nvme.h
    @@ -345,6 +345,8 @@ struct nvme_ns_head {
             spinlock_t requeue_lock;
             struct work_struct requeue_work;
             struct mutex lock;
    +        unsigned long flags;
    +#define NVME_NSHEAD_DISK_LIVE 0
             struct nvme_ns __rcu *current_path[];
     #endif
     };
    --
    2.25.1