Subject: Re: [PATCH V2] nvme: fix nvme_remove going to uninterruptible sleep for ever
On Thu, Jun 01, 2017 at 02:46:32PM +0200, Christoph Hellwig wrote:
> On Thu, Jun 01, 2017 at 03:36:50PM +0300, Rakesh Pandit wrote:
> > Also, Sagi's point that a user-space set_features ioctl fired in a
> > window after nvme removal can also trigger this issue seems correct.
> > I would prefer to keep this as it is and introduce a similar check
> > higher up in nvme_ioctl instead, so that we don't send sync commands
> > if the queues are already killed.
> >
> > Would you prefer a patch? Thanks,
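
(For illustration, a hypothetical sketch of the kind of early check in
nvme_ioctl() described above; this is not an actual patch, and the
NVME_CTRL_DEAD test is an assumption, not code from this thread:)

/*
 * Hypothetical sketch only: fail the ioctl early when the controller
 * is already dead, so user space can't trigger a sync command that
 * would sleep forever on killed queues.
 */
static int nvme_ioctl(struct block_device *bdev, fmode_t mode,
		unsigned int cmd, unsigned long arg)
{
	struct nvme_ns *ns = bdev->bd_disk->private_data;

	if (ns->ctrl->state == NVME_CTRL_DEAD)
		return -ENODEV;

	/* ... existing command dispatch continues here ... */
	return -ENOTTY;
}
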
>
> If we want to kill everyone we probably should do it in ->queue_rq.

It looks like ->queue_rq already does that by checking nvmeq->cq_vector.
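
For reference, this is roughly what that check looks like in
nvme_queue_rq() in drivers/nvme/host/pci.c (paraphrased and abridged;
exact details may differ): once removal clears cq_vector, a request
reaching ->queue_rq is failed instead of queued.

static int nvme_queue_rq(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	struct nvme_queue *nvmeq = hctx->driver_data;
	int ret;

	/* ... request setup elided ... */

	spin_lock_irq(&nvmeq->q_lock);
	if (unlikely(nvmeq->cq_vector < 0)) {
		/* queue already torn down by removal: fail the request */
		ret = BLK_MQ_RQ_QUEUE_ERROR;
		spin_unlock_irq(&nvmeq->q_lock);
		goto out_cleanup_iod;
	}

	/* ... submit the command and return ... */
}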

> Or is the block layer blocking you somewhere else?

blk-mq doesn't handle a dying queue in the I/O path.

This may be similar to commit 806f026f9b901eaf1a ("nvme: use
blk_mq_start_hw_queues() in nvme_kill_queues()"); it seems we need to
do the same for admin_q too.
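
For context, blk_mq_start_hw_queues() simply clears the STOPPED state
on every hardware queue and reruns it, so requests parked on a stopped
queue get dispatched to ->queue_rq (where the cq_vector check above
fails them) instead of sleeping forever. Approximately, from
block/blk-mq.c:

void blk_mq_start_hw_queue(struct blk_mq_hw_ctx *hctx)
{
	clear_bit(BLK_MQ_S_STOPPED, &hctx->state);
	blk_mq_run_hw_queue(hctx, false);
}

void blk_mq_start_hw_queues(struct request_queue *q)
{
	struct blk_mq_hw_ctx *hctx;
	int i;

	queue_for_each_hw_ctx(q, hctx, i)
		blk_mq_start_hw_queue(hctx);
}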

Can the following change fix the issue?

diff --git a/drivers/nvme/host/core.c b/drivers/nvme/host/core.c
index e44326d5cf19..360758488124 100644
--- a/drivers/nvme/host/core.c
+++ b/drivers/nvme/host/core.c
@@ -2438,6 +2438,7 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
 	struct nvme_ns *ns;
 
 	mutex_lock(&ctrl->namespaces_mutex);
+	blk_mq_start_hw_queues(ctrl->admin_q);
 	list_for_each_entry(ns, &ctrl->namespaces, list) {
 		/*
 		 * Revalidating a dead namespace sets capacity to 0. This will

Thanks,
Ming
