Subject: Re: [PATCH v7 45/72] x86/entry/64: Add entry code for #VC handler
> +
> + /*
> + * No need to switch back to the IST stack. The current stack is either
> + * identical to the stack in the IRET frame or the VC fall-back stack,
> + * so it is definitely mapped even with PTI enabled.
> + */
> + jmp paranoid_exit
> +
>

Hello

I know we don't enable PTI on AMD, but the above comment doesn't match the
code that follows.

Assume PTI is enabled, since the comment says "even with PTI enabled".

When a #VC happens after entry_SYSCALL_64 but before it switches to the
kernel CR3, vc_switch_off_ist() will switch the stack to the kernel task
stack, and paranoid_exit can't work: it switches back to the user CR3
while still running on that kernel stack.

The comment above misses the case where the current stack can be the
kernel task stack, which is not mapped in the user CR3.
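
To make the window concrete, here is a rough paraphrase of the start of
entry_SYSCALL_64 as I read the current entry code (paraphrased, exact
lines may differ), plus the sequence I am worried about:

	/* entry_SYSCALL_64 (paraphrased) */
	swapgs
	movq	%rsp, PER_CPU_VAR(cpu_tss_rw + TSS_sp2)	/* stash user RSP */
	SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp		/* kernel CR3 only from here */
	movq	PER_CPU_VAR(cpu_current_top_of_stack), %rsp
	/* entry_SYSCALL_64_safe_stack: */

	/*
	 * If #VC hits between entry_SYSCALL_64 and SWITCH_TO_KERNEL_CR3:
	 *   - paranoid_entry saves the (user) CR3 and switches to kernel CR3
	 *   - vc_switch_off_ist() moves RSP to cpu_current_top_of_stack
	 *   - paranoid_exit restores the saved user CR3
	 *   - the IRET path then pops pt_regs from the task stack, which is
	 *     not mapped in the user page tables -> fault
	 */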

Maybe I missed something.

Thanks
Lai

> +#ifdef CONFIG_AMD_MEM_ENCRYPT
> +asmlinkage __visible noinstr struct pt_regs *vc_switch_off_ist(struct pt_regs *regs)
> +{
> + unsigned long sp, *stack;
> + struct stack_info info;
> + struct pt_regs *regs_ret;
> +
> + /*
> + * In the SYSCALL entry path the RSP value comes from user-space - don't
> + * trust it and switch to the current kernel stack
> + */
> + if (regs->ip >= (unsigned long)entry_SYSCALL_64 &&
> + regs->ip < (unsigned long)entry_SYSCALL_64_safe_stack) {
> + sp = this_cpu_read(cpu_current_top_of_stack);
> + goto sync;
> + }

\
 
 \ /
  Last update: 2021-01-24 15:15    [W:0.739 / U:0.024 seconds]
©2003-2020 Jasper Spaans|hosted at Digital Ocean and TransIP|Read the blog|Advertise on this site