Subject: Re: [PATCH] powerpc: Save AMR/IAMR when switching tasks
Christophe Leroy <christophe.leroy@csgroup.eu> writes:
> On 19/09/2022 at 14:37, Michael Ellerman wrote:
>> Christophe Leroy <christophe.leroy@csgroup.eu> writes:
>>> On 16/09/2022 at 07:05, Samuel Holland wrote:
>>>> With CONFIG_PREEMPT=y (involuntary preemption enabled), it is possible
>>>> to switch away from a task inside copy_{from,to}_user. This left the CPU
>>>> with userspace access enabled until after the next IRQ or privilege
>>>> level switch, when AMR/IAMR got reset to AMR_KU[AE]P_BLOCKED. Then, when
>>>> switching back to the original task, the userspace access would fault:
>>>
>>> This is not supposed to happen. You never switch away from a task
>>> magically. A task switch always happens in an interrupt, which means
>>> copy_{from,to}_user() gets interrupted.
>>
>> Unfortunately this isn't true when CONFIG_PREEMPT=y.
>
> Argh, yes, I wrote the above with the assumption that we properly follow
> the main principle that no complex function is to be called while KUAP is
> open ... which is apparently not the case here. x86 would have detected it
> with objtool, but we don't have that in powerpc yet.

Yes and yes :/

>> We can switch away without an interrupt via:
>> __copy_tofrom_user()
>> -> __copy_tofrom_user_power7()
>> -> exit_vmx_usercopy()
>> -> preempt_enable()
>> -> __preempt_schedule()
>> -> preempt_schedule()
>> -> preempt_schedule_common()
>> -> __schedule()
>
>
> Should we use preempt_enable_no_resched() to avoid that?

Good point :)
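
For reference, with CONFIG_PREEMPTION=y the difference between the two is
roughly this (simplified from include/linux/preempt.h, so a sketch rather
than the exact definitions):

  #define preempt_enable() \
  do { \
          barrier(); \
          if (unlikely(preempt_count_dec_and_test())) \
                  __preempt_schedule(); \
  } while (0)

  #define preempt_enable_no_resched() \
  do { \
          barrier(); \
          preempt_count_dec(); \
  } while (0)

ie. the no_resched variant just drops the preempt count and never calls
__preempt_schedule(), which is what leads into the __schedule() chain above.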

...
>>
>> Still I think it might be our best option for an easy fix.
>
> Wouldn't it be even easier and less abusive to use
> preempt_enable_no_resched()? Or is there definitely a good reason to
> reschedule after a VMX copy when we don't with regular copies?

I don't think it's important to reschedule there. I guess it means
another task that wants to preempt will have to wait a little longer,
but I doubt it's measurable.

One reason to do the KUAP lock is that it also protects us from running
disable_kernel_altivec() with KUAP unlocked.

That in turn calls into msr_check_and_clear() and
__msr_check_and_clear(), none of which is a problem per se, but we'd
need to make all of that notrace to be 100% safe.

At the moment we're running all of that with KUAP unlocked anyway, so
using preempt_enable_no_resched() would fix the actual bug and be much
nicer than my patch; we should probably do that.
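
Something like the below, ie. an untested sketch assuming exit_vmx_usercopy()
in arch/powerpc/lib/vmx-helper.c still looks the way I think it does:

  int exit_vmx_usercopy(void)
  {
          disable_kernel_altivec();
          pagefault_enable();
          /*
           * Don't check for a pending reschedule here: we may still be in
           * the middle of a user copy with KUAP open, so just drop the
           * preempt count and let the reschedule happen at the next
           * preemption point (interrupt return or preempt_enable()).
           */
          preempt_enable_no_resched();
          return 0;
  }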

We can look at making disable_kernel_altivec() etc. notrace as a
follow-up.
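
To be concrete about that follow-up (untested, and the exact set of
functions that needs annotating would have to be checked):
disable_kernel_altivec() itself is just a static inline around
msr_check_and_clear(), so the annotation would go on the out-of-line
helpers underneath it, eg:

  /* arch/powerpc/kernel/process.c */
  -void __msr_check_and_clear(unsigned long bits)
  +void notrace __msr_check_and_clear(unsigned long bits)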

cheers
