Subject: [PATCH 2/4] x86/retpoline: Avoid return buffer underflows on context switch
From: Andi Kleen <ak@linux.intel.com>

CPUs have return buffers which store return addresses and are used
to predict the targets of RET instructions. Some CPUs (Skylake, some
Broadwells) can fall back to the indirect branch predictor when the
return buffer underflows.

With retpoline we want to avoid uncontrolled indirect branches,
which could be poisoned by ring 3, so we need to avoid uncontrolled
return buffer underflows in the kernel.

This can happen when we context switch from a task with a shallow
kernel call stack to one with a deeper kernel call stack. As the new
task returns through its deeper stack it eventually underflows the
return buffer, which again falls back to the indirect branch
predictor.

The other thread could be running a system call triggered by an
attacker, so the context switch would cause the attacked thread to
fall back to an uncontrolled indirect branch, which would then use
values planted by the attacker.

To guard against this, fill the return buffer with controlled
content during a context switch. This prevents any underflows.

This is only enabled on Skylake.
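
For reference, a rough sketch of what FILL_RETURN_BUFFER (provided by
the retpoline infrastructure, not part of this patch) is expected to
expand to, in its 64-bit flavour. The label numbers, the trap sequence
and the loop structure below are illustrative only, and the wrapper
that keys the sequence off the feature bit via alternatives is
omitted:

	/*
	 * Sketch only: execute RSB_FILL_LOOPS CALLs whose return sites
	 * are never reached architecturally, so the return buffer ends
	 * up filled with benign entries, then undo the stack space the
	 * CALLs consumed.
	 */
	mov	$(RSB_FILL_LOOPS / 2), %r8	/* two calls per iteration */
1:	call	2f			/* push a benign return address */
3:	pause				/* speculation trap: a predicted */
	jmp	3b			/* return can only spin here */
2:	call	4f
5:	pause
	jmp	5b
4:	dec	%r8
	jnz	1b
	add	$(RSB_FILL_LOOPS * 8), %rsp	/* drop the CALLs' stack slots */

The feature bit passed as the last argument
(X86_FEATURE_RETURN_UNDERFLOW) lets the sequence be patched in only on
CPUs that fall back to indirect prediction on underflow, and the
scratch register argument differs between the 32-bit (%ecx) and
64-bit (%r8) entry code.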

Signed-off-by: Andi Kleen <ak@linux.intel.com>
---
arch/x86/entry/entry_32.S | 14 ++++++++++++++
arch/x86/entry/entry_64.S | 14 ++++++++++++++
2 files changed, 28 insertions(+)

diff --git a/arch/x86/entry/entry_32.S b/arch/x86/entry/entry_32.S
index a1f28a54f23a..bbecb7c2f6cb 100644
--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -250,6 +250,20 @@ ENTRY(__switch_to_asm)
popl %ebx
popl %ebp

+ /*
+ * When we switch from a shallower to a deeper call stack
+ * the call stack will underflow in the kernel in the next task.
+ * This could cause the CPU to fall back to indirect branch
+ * prediction, which may be poisoned.
+ *
+ * To guard against that always fill the return stack with
+ * known values.
+ *
+ * We do this in assembler because it needs to be before
+ * any calls on the new stack, and this can be difficult to
+ * ensure in a complex C function like __switch_to.
+ */
+ FILL_RETURN_BUFFER %ecx, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
jmp __switch_to
END(__switch_to_asm)

diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
index 59874bc1aed2..3caac129cd07 100644
--- a/arch/x86/entry/entry_64.S
+++ b/arch/x86/entry/entry_64.S
@@ -495,6 +495,20 @@ ENTRY(__switch_to_asm)
popq %rbx
popq %rbp

+ /*
+ * When we switch from a shallower to a deeper call stack
+ * the call stack will underflow in the kernel in the next task.
+ * This could cause the CPU to fall back to indirect branch
+ * prediction, which may be poisoned.
+ *
+ * To guard against that always fill the return stack with
+ * known values.
+ *
+ * We do this in assembler because it needs to be before
+ * any calls on the new stack, and this can be difficult to
+ * ensure in a complex C function like __switch_to.
+ */
+ FILL_RETURN_BUFFER %r8, RSB_FILL_LOOPS, X86_FEATURE_RETURN_UNDERFLOW
jmp __switch_to
END(__switch_to_asm)

--
2.14.3