From:    Akinobu Mita <akinobu.mita@gmail.com>
Subject: [PATCH v3 1/7] blk-mq: avoid accessing hctx->tags->cpumask before allocation
Date:    Sat, 18 Jul 2015
When an unmapped hw queue is remapped after the CPU topology has
changed, blk_mq_map_swqueue() sets hctx->tags->cpumask before
hctx->tags is allocated.

In order to fix this NULL pointer dereference, hctx->tags must be
allocated before configuring hctx->tags->cpumask.
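
For illustration only (not part of the kernel change): a minimal
userspace sketch of the ordering problem, using hypothetical
fake_hctx/fake_tags stand-ins instead of the real blk-mq structures.

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for blk_mq_hw_ctx and blk_mq_tags; the names
 * and layout are illustrative only. */
struct fake_tags {
	unsigned long cpumask;		/* stand-in for a real cpumask */
};

struct fake_hctx {
	struct fake_tags *tags;		/* NULL until the tag set is allocated */
};

int main(void)
{
	struct fake_hctx hctx = { .tags = NULL };

	/*
	 * Buggy order: reading or writing hctx.tags->cpumask here would
	 * dereference a NULL pointer, which is what happened when the
	 * cpumask was set before the tag set existed.
	 */

	/* Fixed order: allocate the tag set first, then set the CPU bit. */
	hctx.tags = calloc(1, sizeof(*hctx.tags));
	if (!hctx.tags)
		return 1;
	hctx.tags->cpumask |= 1UL << 0;	/* mark CPU 0 as mapped */

	printf("cpumask after allocation: %#lx\n", hctx.tags->cpumask);
	free(hctx.tags);
	return 0;
}

In the patch itself the fix is simply to defer the cpumask assignment
to a second queue_for_each_ctx() pass that runs after hctx->tags has
been set up.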

Fixes: f26cdc8536 ("blk-mq: Shared tag enhancements")
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
Cc: Keith Busch <keith.busch@intel.com>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: Ming Lei <tom.leiming@gmail.com>
---
block/blk-mq.c | 9 ++++++++-
1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/block/blk-mq.c b/block/blk-mq.c
index 7d842db..f29f766 100644
--- a/block/blk-mq.c
+++ b/block/blk-mq.c
@@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 
 		hctx = q->mq_ops->map_queue(q, i);
 		cpumask_set_cpu(i, hctx->cpumask);
-		cpumask_set_cpu(i, hctx->tags->cpumask);
 		ctx->index_hw = hctx->nr_ctx;
 		hctx->ctxs[hctx->nr_ctx++] = ctx;
 	}
@@ -1861,6 +1860,14 @@ static void blk_mq_map_swqueue(struct request_queue *q)
 		hctx->next_cpu = cpumask_first(hctx->cpumask);
 		hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
 	}
+
+	queue_for_each_ctx(q, ctx, i) {
+		if (!cpu_online(i))
+			continue;
+
+		hctx = q->mq_ops->map_queue(q, i);
+		cpumask_set_cpu(i, hctx->tags->cpumask);
+	}
 }
 
 static void blk_mq_update_tag_set_depth(struct blk_mq_tag_set *set)
--
1.9.1

