Subject: Re: [PATCH V3 1/3] x86/entry: avoid calling into sync_regs() when entering from userspace

On Mon, Aug 17, 2020 at 8:23 AM Lai Jiangshan <jiangshanlai@gmail.com> wrote:
> Commit 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for
> IDT entries") changed the entry code so that when any exception happens
> in userspace, pt_regs is saved on the sp0 (trampoline) stack and then
> copied to the thread stack via sync_regs(), switching to the thread
> stack afterward.
>
> Recent x86/entry work made interrupts use idtentry as well, so all
> interrupt code now likewise saves pt_regs on the sp0 stack and then
> copies it to the thread stack, as exceptions do.
>
> These are hot paths (page fault, IPI), so that overhead should be
> avoided. This patch borrows the way the original interrupt_entry code
> handled it: when coming from userspace, it switches to the thread
> stack directly, right away.
>
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>

As far as I can see, on systems affected by Meltdown, this patch fixes
register state leakage between tasks because any data that is written
to the per-CPU trampoline stacks must be considered visible to all
userspace. I think that makes this a fix that should go into stable
kernels.
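
(For reference, the copy in question is the one sync_regs() performs
today; roughly the following, simplified from arch/x86/kernel/traps.c.
The point is that with the current code, the complete pt_regs is
written to the Meltdown-readable trampoline stack before being copied
over.)

/*
 * Simplified sketch of the current behavior: pt_regs has already been
 * written to the per-CPU trampoline stack; this copies it onto the
 * task's real thread stack. On Meltdown-affected CPUs, that initial
 * write is what can leak, because the trampoline stack is mapped in
 * the user page tables.
 */
asmlinkage __visible noinstr struct pt_regs *sync_regs(struct pt_regs *eregs)
{
	struct pt_regs *regs =
		(struct pt_regs *)this_cpu_read(cpu_current_top_of_stack) - 1;

	if (regs != eregs)
		*regs = *eregs;	/* copy from trampoline stack to thread stack */

	return regs;
}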

Therefore, please add:

Fixes: 7f2590a110b8 ("x86/entry/64: Use a per-CPU trampoline stack for IDT entries")
Cc: stable@vger.kernel.org


> ---
> arch/x86/entry/entry_64.S | 43 +++++++++++++++++++++++++++++++--------
> 1 file changed, 34 insertions(+), 9 deletions(-)
>
> diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> index 70dea9337816..1a7715430da3 100644
> --- a/arch/x86/entry/entry_64.S
> +++ b/arch/x86/entry/entry_64.S
> @@ -928,19 +928,42 @@ SYM_CODE_END(paranoid_exit)
> SYM_CODE_START_LOCAL(error_entry)
> UNWIND_HINT_FUNC
> cld
> - PUSH_AND_CLEAR_REGS save_ret=1
> - ENCODE_FRAME_POINTER 8
> - testb $3, CS+8(%rsp)
> + testb $3, CS-ORIG_RAX+8(%rsp)
> jz .Lerror_kernelspace
>
> - /*
> - * We entered from user mode or we're pretending to have entered
> - * from user mode due to an IRET fault.
> - */

As far as I can tell, this comment is still correct, and it is
helpful. Why are you removing it?

> SWAPGS
> FENCE_SWAPGS_USER_ENTRY
> - /* We have user CR3. Change to kernel CR3. */
> - SWITCH_TO_KERNEL_CR3 scratch_reg=%rax
> + /*
> + * Switch to the thread stack. The IRET frame and orig_ax are
> + * on the stack, as well as the return address. RDI..R12 are

Did you mean RDI..R15?

> + * not (yet) on the stack and space has not (yet) been
> + * allocated for them.
> + */
> + pushq %rdx
> +
> + /* Need to switch before accessing the thread stack. */
> + SWITCH_TO_KERNEL_CR3 scratch_reg=%rdx
> + movq %rsp, %rdx
> + movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp

Can we avoid spilling %rdx to the Meltdown-readable entry stack here?
We could do something similar to what entry_SYSCALL_64 does, roughly
like this:


/*
* While there is an iret frame, it won't be easy to find for a
* few instructions, so let's pretend it doesn't exist.
*/
UNWIND_HINT_EMPTY

/*
* Switch to kernel CR3 and stack. To avoid spilling secret
* userspace register state to the trampoline stack, we use
* RSP as scratch - we can reconstruct the old RSP afterwards
* using TSS_sp0.
*/
SWITCH_TO_KERNEL_CR3 scratch_reg=%rsp
movq PER_CPU_VAR(cpu_current_top_of_stack), %rsp

pushq %rdx /* scratch, will be replaced with regs->ss later */
mov PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rdx
sub $7*8, %rdx /* return address, orig_ax, IRET frame */
/*
* We have return address and orig_ax on the stack on
* top of the IRET frame. That means offset=2*8
*/
UNWIND_HINT_IRET_REGS base=%rdx offset=2*8

pushq 5*8(%rdx) /* regs->rsp */
pushq 4*8(%rdx) /* regs->eflags */
pushq 3*8(%rdx) /* regs->cs */
pushq 2*8(%rdx) /* regs->ip */
pushq 1*8(%rdx) /* regs->orig_ax */
pushq (%rdx) /* return address */
UNWIND_HINT_FUNC

PUSH_AND_CLEAR_REGS rdx=7*8(%rsp), save_ret=1

/* copy regs->ss from trampoline stack */
movq PER_CPU_VAR(cpu_tss_rw + TSS_sp0), %rax
mov -1*8(%rax), %rax
movq %rax, 21*8(%rsp) /* regs->ss: 8 (return address) + 20*8 into pt_regs */

ENCODE_FRAME_POINTER 8

ret


Does something like that seem like a reasonable idea?

> + /*
> + * We have RDX, return address, and orig_ax on the stack on
> + * top of the IRET frame. That means offset=24
> + */
> + UNWIND_HINT_IRET_REGS base=%rdx offset=24
> +
> + pushq 7*8(%rdx) /* regs->ss */
> + pushq 6*8(%rdx) /* regs->rsp */
> + pushq 5*8(%rdx) /* regs->eflags */
> + pushq 4*8(%rdx) /* regs->cs */
> + pushq 3*8(%rdx) /* regs->ip */
> + pushq 2*8(%rdx) /* regs->orig_ax */
> + pushq 8(%rdx) /* return address */
> + UNWIND_HINT_FUNC
> +
> + PUSH_AND_CLEAR_REGS rdx=(%rdx), save_ret=1
> + ENCODE_FRAME_POINTER 8
> + ret
