Date: Mon, 27 Jul 2015
Subject: Re: [PATCH v3 1/7] blk-mq: avoid access hctx->tags->cpumask before allocation
From: Ming Lei <tom.leiming@gmail.com>
On Tue, Jul 21, 2015 at 9:58 AM, Akinobu Mita <akinobu.mita@gmail.com> wrote:
> On Sun, 2015-07-19 at 18:24 +0800, Ming Lei wrote:
>> On Sun, Jul 19, 2015 at 12:28 AM, Akinobu Mita <akinobu.mita@gmail.com> wrote:
>> > When an unmapped hw queue is remapped after the CPU topology has
>> > changed, hctx->tags->cpumask is set before hctx->tags is allocated
>> > in blk_mq_map_swqueue().
>> >
>> > To fix this NULL pointer dereference, hctx->tags must be allocated
>> > before hctx->tags->cpumask is configured.
>>
>> The root cause is that the mapping between hctx and ctx can change
>> after the CPU topology changes, and then hctx->tags can change too,
>> so hctx->tags->cpumask has to be set after hctx->tags is set up.
>>
>> >
>> > Fixes: f26cdc8536 ("blk-mq: Shared tag enhancements")
>>
>> I am wondering whether the above commit considered CPU hotplug;
>> nvme uses tags->cpumask to set the irq affinity hint only while
>> starting the queue. It looks reasonable to introduce a
>> mapping_changed() callback for handling this kind of thing, but
>> that isn't related to this patch.
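
Just to sketch the mapping_changed() idea (purely hypothetical, no such
callback exists in blk_mq_ops), it could look something like:

struct request_queue;

/*
 * Hypothetical: invoked after blk_mq_map_swqueue() rebuilds the
 * ctx<->hctx mapping, so a driver such as nvme could refresh its
 * irq affinity hints from the new tags->cpumask.
 */
struct blk_mq_mapping_notify {
        void (*mapping_changed)(struct request_queue *q);
};
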
>>
>> > Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com>
>> > Cc: Keith Busch <keith.busch@intel.com>
>> > Cc: Jens Axboe <axboe@kernel.dk>
>> > Cc: Ming Lei <tom.leiming@gmail.com>
>> > ---
>> > block/blk-mq.c | 9 ++++++++-
>> > 1 file changed, 8 insertions(+), 1 deletion(-)
>> >
>> > diff --git a/block/blk-mq.c b/block/blk-mq.c
>> > index 7d842db..f29f766 100644
>> > --- a/block/blk-mq.c
>> > +++ b/block/blk-mq.c
>> > @@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>> >
>> >                  hctx = q->mq_ops->map_queue(q, i);
>> >                  cpumask_set_cpu(i, hctx->cpumask);
>> > -                cpumask_set_cpu(i, hctx->tags->cpumask);
>> >                  ctx->index_hw = hctx->nr_ctx;
>> >                  hctx->ctxs[hctx->nr_ctx++] = ctx;
>> >          }
>> > @@ -1861,6 +1860,14 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>> >                  hctx->next_cpu = cpumask_first(hctx->cpumask);
>> >                  hctx->next_cpu_batch = BLK_MQ_CPU_WORK_BATCH;
>> >          }
>> > +
>> > +        queue_for_each_ctx(q, ctx, i) {
>> > +                if (!cpu_online(i))
>> > +                        continue;
>> > +
>> > +                hctx = q->mq_ops->map_queue(q, i);
>> > +                cpumask_set_cpu(i, hctx->tags->cpumask);
>>
>> If tags->cpumask is always the same as hctx->cpumask, this
>> CPU iterator can be avoided.
>
> How about this patch?
> Or should we use cpumask_or() instead of cpumask_copy()?

I guess tags->cpumask needs to be fixed in the future, so it is better
to just take the current patch:

[PATCH v3 1/7] blk-mq: avoid access hctx->tags->cpumask before allocation

Thanks,

>
> diff --git a/block/blk-mq.c b/block/blk-mq.c
> index 7d842db..56f814a 100644
> --- a/block/blk-mq.c
> +++ b/block/blk-mq.c
> @@ -1821,7 +1821,6 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>
>                  hctx = q->mq_ops->map_queue(q, i);
>                  cpumask_set_cpu(i, hctx->cpumask);
> -                cpumask_set_cpu(i, hctx->tags->cpumask);
>                  ctx->index_hw = hctx->nr_ctx;
>                  hctx->ctxs[hctx->nr_ctx++] = ctx;
>          }
> @@ -1846,7 +1845,10 @@ static void blk_mq_map_swqueue(struct request_queue *q)
>                  if (!set->tags[i])
>                          set->tags[i] = blk_mq_init_rq_map(set, i);
>                  hctx->tags = set->tags[i];
> -                WARN_ON(!hctx->tags);
> +                if (hctx->tags)
> +                        cpumask_copy(hctx->tags->cpumask, hctx->cpumask);
> +                else
> +                        WARN_ON(1);
>
>                  /*
>                   * Set the map size to the number of mapped software queues.
>
>
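
For reference on the cpumask_or() question above, the difference is only
whether the tag set's mask is overwritten or accumulated. A userspace
sketch with plain bitmasks (not the kernel cpumask API):

#include <stdio.h>

int main(void)
{
        unsigned long tags_mask;
        unsigned long hctx0_mask = 0x3;  /* CPUs 0-1 */
        unsigned long hctx1_mask = 0xc;  /* CPUs 2-3 */

        /*
         * cpumask_copy semantics: a second copy clobbers the first,
         * which matters if several hctxs ever share one tag set.
         */
        tags_mask = hctx0_mask;
        tags_mask = hctx1_mask;
        printf("copy: %#lx\n", tags_mask);  /* 0xc */

        /* cpumask_or semantics: bits accumulate across hctxs */
        tags_mask = 0;
        tags_mask |= hctx0_mask;
        tags_mask |= hctx1_mask;
        printf("or:   %#lx\n", tags_mask);  /* 0xf */

        return 0;
}

Since blk_mq_map_swqueue() clears and rebuilds hctx->cpumask on every
remap, a plain copy is enough while a tag set tracks a single hctx; tag
sets shared across queues are what would make the overwrite a problem,
which is why I think tags->cpumask still needs to be fixed in the future.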



--
Ming Lei

