Subject: Re: KASAN: null-ptr-deref Write in io_uring_cancel_sqpoll
From: Pavel Begunkov <asml.silence@gmail.com>
Date: Tue, 27 Apr 2021
On 4/27/21 11:39 AM, Palash Oswal wrote:
> On Tue, Apr 27, 2021 at 2:07 PM Pavel Begunkov <asml.silence@gmail.com> wrote:
>>
>> io_sq_offload_create() {
>> 	...
>> 	ret = io_uring_alloc_task_context(tsk, ctx);
>> 	wake_up_new_task(tsk);
>> 	if (ret)
>> 		goto err;
>> }
>>
>> Shouldn't happen unless offload create has failed. Just add
>> a return in *cancel_sqpoll() for this case. It's failing,
>> so no requests have been submitted and no cancellation is needed.
>
> io_uring_cancel_sqpoll can be called by two flows:
> 1. io_uring_task_cancel() -> io_sqpoll_cancel_sync() ->
> io_uring_cancel_sqpoll(), which properly checks that current->io_uring
> is non-NULL
> (https://elixir.bootlin.com/linux/v5.12/source/include/linux/io_uring.h#L21).
> 2. io_sq_offload_create() -> io_sq_thread() -> io_uring_cancel_sqpoll(),
> which does not check the value of current->io_uring.
>
> In the second flow
> (https://elixir.bootlin.com/linux/v5.12/source/fs/io_uring.c#L7970),
> the initialization of current->io_uring (i.e.
> io_uring_alloc_task_context()) happens after io_sq_thread() has
> already started, and therefore io_uring_cancel_sqpoll() receives a
> NULL value for current->io_uring.

Right, exactly as I wrote in the previous message. And placing the
check in io_uring_cancel_sqpoll() is still the better (safer) option:
it covers every caller of the function instead of just this one call
site.

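Something along these lines would do it (an untested sketch against
the 5.12 sources; the tctx variable and the atomic_inc() are already
there at the top of the function, the early return is the new part):

static void io_uring_cancel_sqpoll(struct io_ring_ctx *ctx)
{
	struct io_uring_task *tctx = current->io_uring;

	/*
	 * Ring setup failed, so the sqpoll task never got a task
	 * context: no requests were submitted and there is nothing
	 * to cancel.
	 */
	if (!tctx)
		return;

	atomic_inc(&tctx->in_idle);
	/* ... rest of the cancellation/wait logic unchanged ... */
}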
Just send a patch for 5.13 and mark it for stable.

>
> The backtrace from the crash confirms the second scenario:
> [ 70.661551] ==================================================================
> [ 70.662764] BUG: KASAN: null-ptr-deref in io_uring_cancel_sqpoll+0x203/0x350
> [ 70.663834] Write of size 4 at addr 0000000000000060 by task iou-sqp-750/755
> [ 70.664025]
> [ 70.664025] CPU: 1 PID: 755 Comm: iou-sqp-750 Not tainted 5.12.0 #101
> [ 70.664025] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
> BIOS 1.14.0-1 04/01/2014
> [ 70.664025] Call Trace:
> [ 70.664025] dump_stack+0xe9/0x168
> [ 70.664025] ? io_uring_cancel_sqpoll+0x203/0x350
> [ 70.664025] __kasan_report+0x166/0x1c0
> [ 70.664025] ? io_uring_cancel_sqpoll+0x203/0x350
> [ 70.664025] kasan_report+0x4f/0x70
> [ 70.664025] kasan_check_range+0x2f3/0x340
> [ 70.664025] __kasan_check_write+0x14/0x20
> [ 70.664025] io_uring_cancel_sqpoll+0x203/0x350
> [ 70.664025] ? io_sq_thread_unpark+0xd0/0xd0
> [ 70.664025] ? mutex_lock+0xbb/0x130
> [ 70.664025] ? init_wait_entry+0xe0/0xe0
> [ 70.664025] ? wait_for_completion_killable_timeout+0x20/0x20
> [ 70.664025] io_sq_thread+0x174c/0x18c0
> [ 70.664025] ? io_rsrc_put_work+0x380/0x380
> [ 70.664025] ? init_wait_entry+0xe0/0xe0
> [ 70.664025] ? _raw_spin_lock_irq+0xa5/0x180
> [ 70.664025] ? _raw_spin_lock_irqsave+0x190/0x190
> [ 70.664025] ? calculate_sigpending+0x6b/0xa0
> [ 70.664025] ? io_rsrc_put_work+0x380/0x380
> [ 70.664025] ret_from_fork+0x22/0x30
>
> We might want to add additional validation before calling
> io_uring_cancel_sqpoll(). I verified that the reproducer no longer
> triggers the bug after the following change:
> ---
> diff --git a/fs/io_uring.c b/fs/io_uring.c
> index dff34975d86b..36fc9abe8022 100644
> --- a/fs/io_uring.c
> +++ b/fs/io_uring.c
> @@ -6832,8 +6832,10 @@ static int io_sq_thread(void *data)
>  		timeout = jiffies + sqd->sq_thread_idle;
>  	}
>
> -	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
> -		io_uring_cancel_sqpoll(ctx);
> +	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list) {
> +		if (current->io_uring)
> +			io_uring_cancel_sqpoll(ctx);
> +	}
>  	sqd->thread = NULL;
>  	list_for_each_entry(ctx, &sqd->ctx_list, sqd_list)
>  		io_ring_set_wakeup_flag(ctx);
>

--
Pavel Begunkov
