From: Greg Kroah-Hartman <>
Subject: [PATCH 4.8 67/85] dm rq: fix a race condition in rq_completed()
Date: Wed, 4 Jan 2017 21:47:52 +0100
4.8-stable review patch. If anyone has any objections, please let me know.
------------------
From: Bart Van Assche <bart.vanassche@sandisk.com>
commit d15bb3a6467e102e60d954aadda5fb19ce6fd8ec upstream.
The queue lock must be held when calling blk_run_queue_async() to avoid triggering a race between blk_run_queue_async() and blk_cleanup_queue().
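For reference, the locking pattern the fix applies looks like the sketch below. This is an illustrative fragment, not part of the patch: the helper name run_queue_safely is hypothetical, and it assumes the legacy (non-blk-mq) request path, where q->queue_lock is a spinlock pointer and the caller may run with interrupts disabled.

#include <linux/blkdev.h>
#include <linux/spinlock.h>

/*
 * Illustrative sketch only (hypothetical helper): re-run a legacy
 * (non-mq) request queue while holding q->queue_lock, so that
 * blk_cleanup_queue(), which takes the same lock while tearing the
 * queue down, cannot race with the async run request.
 */
static void run_queue_safely(struct request_queue *q)
{
	unsigned long flags;

	spin_lock_irqsave(q->queue_lock, flags);
	blk_run_queue_async(q);	/* expects the queue lock to be held */
	spin_unlock_irqrestore(q->queue_lock, flags);
}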
Signed-off-by: Bart Van Assche <bart.vanassche@sandisk.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
---
 drivers/md/dm-rq.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)
--- a/drivers/md/dm-rq.c
+++ b/drivers/md/dm-rq.c
@@ -218,6 +218,9 @@ static void rq_end_stats(struct mapped_d
  */
 static void rq_completed(struct mapped_device *md, int rw, bool run_queue)
 {
+	struct request_queue *q = md->queue;
+	unsigned long flags;
+
 	atomic_dec(&md->pending[rw]);
 
 	/* nudge anyone waiting on suspend queue */
@@ -230,8 +233,11 @@ static void rq_completed(struct mapped_d
 	 * back into ->request_fn() could deadlock attempting to grab the
 	 * queue lock again.
	 */
-	if (!md->queue->mq_ops && run_queue)
-		blk_run_queue_async(md->queue);
+	if (!q->mq_ops && run_queue) {
+		spin_lock_irqsave(q->queue_lock, flags);
+		blk_run_queue_async(q);
+		spin_unlock_irqrestore(q->queue_lock, flags);
+	}
 
 	/*
	 * dm_put() must be at the end of this function. See the comment above