Subject: Re: usercopy whitelist woe in scsi_sense_cache
From: Jens Axboe <axboe@kernel.dk>
Date: Tue, 17 Apr 2018
On 4/17/18 5:06 PM, Kees Cook wrote:
> On Tue, Apr 17, 2018 at 3:57 PM, Jens Axboe <axboe@kernel.dk> wrote:
>> On 4/17/18 3:48 PM, Jens Axboe wrote:
>>> On 4/17/18 3:47 PM, Kees Cook wrote:
>>>> On Tue, Apr 17, 2018 at 2:39 PM, Jens Axboe <axboe@kernel.dk> wrote:
>>>>> On 4/17/18 3:25 PM, Kees Cook wrote:
>>>>>> On Tue, Apr 17, 2018 at 1:46 PM, Kees Cook <keescook@chromium.org> wrote:
>>>>>>> I see elv.priv[1] assignments made in a few places -- is it possible
>>>>>>> there is some kind of uninitialized-but-not-NULL state that can leak
>>>>>>> in there?
>>>>>>
>>>>>> Got it. This fixes it for me:
>>>>>>
>>>>>> diff --git a/block/blk-mq.c b/block/blk-mq.c
>>>>>> index 0dc9e341c2a7..859df3160303 100644
>>>>>> --- a/block/blk-mq.c
>>>>>> +++ b/block/blk-mq.c
>>>>>> @@ -363,7 +363,7 @@ static struct request *blk_mq_get_request(struct request_queue *q,
>>>>>>
>>>>>>  	rq = blk_mq_rq_ctx_init(data, tag, op);
>>>>>>  	if (!op_is_flush(op)) {
>>>>>> -		rq->elv.icq = NULL;
>>>>>> +		memset(&rq->elv, 0, sizeof(rq->elv));
>>>>>>  		if (e && e->type->ops.mq.prepare_request) {
>>>>>>  			if (e->type->icq_cache && rq_ioc(bio))
>>>>>>  				blk_mq_sched_assign_ioc(rq, bio);
>>>>>> @@ -461,7 +461,7 @@ void blk_mq_free_request(struct request *rq)
>>>>>>  			e->type->ops.mq.finish_request(rq);
>>>>>>  		if (rq->elv.icq) {
>>>>>>  			put_io_context(rq->elv.icq->ioc);
>>>>>> -			rq->elv.icq = NULL;
>>>>>> +			memset(&rq->elv, 0, sizeof(rq->elv));
>>>>>>  		}
>>>>>>  	}
>>>>>
>>>>> This looks like a BFQ problem, this should not be necessary. Paolo,
>>>>> you're calling your own prepare request handler from the insert
>>>>> as well, and your prepare request does nothing if rq->elv.icq == NULL.
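
To make the lifecycle Jens is describing concrete, here is a minimal user-space sketch (all names such as fake_request and prepare_like_bfq are made up for illustration; this is not blk-mq or BFQ code): the core resets only elv.icq when a request slot is reused, so if the scheduler's prepare hook bails out early, elv.priv[] still holds whatever the previous user of that tag left behind.

#include <stdio.h>

/* Stand-ins for the relevant rq->elv fields; names are invented. */
struct fake_request {
	struct {
		void *icq;	/* stands in for rq->elv.icq */
		void *priv[2];	/* stands in for rq->elv.priv[] */
	} elv;
};

/* Mirrors "rq->elv.icq = NULL" in blk_mq_get_request(): priv[] is untouched. */
static void get_request(struct fake_request *rq)
{
	rq->elv.icq = NULL;
}

/* Mirrors the early return in bfq_prepare_request() before the fix below. */
static void prepare_like_bfq(struct fake_request *rq)
{
	if (!rq->elv.icq)
		return;		/* bail out: priv[] keeps whatever was there */
	rq->elv.priv[0] = rq->elv.priv[1] = NULL;	/* fresh state otherwise */
}

int main(void)
{
	struct fake_request rq;

	/* A previous user of this slot left a scheduler-private pointer. */
	rq.elv.priv[1] = (void *)0xdeadbeef;

	/* The slot is reused for a request that never gets an icq. */
	get_request(&rq);
	prepare_like_bfq(&rq);

	/* Later hooks that trust priv[1] now chase a stale pointer. */
	printf("priv[1] after reuse: %p\n", rq.elv.priv[1]);
	return 0;
}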
>>>>
>>>> I sent the patch anyway, since it's kind of a robustness improvement,
>>>> I'd hope. If you fix BFQ also, please add:
>>>
>>> It's also a memset() in the hot path, would prefer to avoid that...
>>> The issue here is really the convoluted bfq usage of insert/prepare,
>>> I'm sure Paolo can take it from here.
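
For context on the hot-path concern: rq->elv is, roughly, an io_cq pointer plus two scheduler-private pointers, so the memset zeroes three machine words per allocation where the existing code writes one. A paraphrased sketch of the two variants (this is not the real struct request definition):

#include <string.h>

/* Rough shape of the fields involved (paraphrased, not the real header). */
struct elv_fields {
	void *icq;	/* struct io_cq * in the kernel */
	void *priv[2];	/* scheduler-private pointers (bfqq/bic for BFQ) */
};

/* Kees's patch: zero all three words on every non-flush allocation. */
static inline void init_with_memset(struct elv_fields *elv)
{
	memset(elv, 0, sizeof(*elv));	/* 24 bytes on a 64-bit build */
}

/* Existing fast path: a single store; priv[] is left to the scheduler hooks. */
static inline void init_icq_only(struct elv_fields *elv)
{
	elv->icq = NULL;
}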
>>
>> Does this fix it?
>>
>> diff --git a/block/bfq-iosched.c b/block/bfq-iosched.c
>> index f0ecd98509d8..d883469a1582 100644
>> --- a/block/bfq-iosched.c
>> +++ b/block/bfq-iosched.c
>> @@ -4934,8 +4934,11 @@ static void bfq_prepare_request(struct request *rq, struct bio *bio)
>>  	bool new_queue = false;
>>  	bool bfqq_already_existing = false, split = false;
>>
>> -	if (!rq->elv.icq)
>> +	if (!rq->elv.icq) {
>> +		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
>>  		return;
>> +	}
>> +
>>  	bic = icq_to_bic(rq->elv.icq);
>>
>>  	spin_lock_irq(&bfqd->lock);
>
> It does! Excellent. :)

Sweet! I'll add a comment and queue it up for 4.17 and mark for stable, with
your annotations too.
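
For reference, the commented-up version of the hunk that Jens mentions might read along these lines (an illustrative sketch only, not the patch that was actually queued):

	if (!rq->elv.icq) {
		/*
		 * rq->elv.priv[] is not cleared by the block core when a
		 * request is recycled, so a request that never got an icq
		 * could otherwise carry stale bfqq/bic pointers from its
		 * previous user into the insert and finish hooks.
		 */
		rq->elv.priv[0] = rq->elv.priv[1] = NULL;
		return;
	}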

--
Jens Axboe
