Subject: [PATCH v2 0/2] x86/asm: Pin sensitive CR4 and CR0 bits
Hi,

Here's a v2 that hopefully addresses the concerns from the v1 thread[1]
on CR4 pinning. It now uses static branches to avoid potential atomicity
problems (though perhaps that is overkill), and it drops the needless
volatile marking in favor of proper asm constraint flags. The one piece
that eluded me, but which I think is okay, is delaying the bit setting
on a per-CPU basis. Since the bits are global state and we don't have
read-only per-CPU data, it seemed safe as I've got it here.

Full patch 1 commit log follows, just in case it's useful to have it
in this cover letter...

[1] https://lkml.kernel.org/r/CAHk-=wjNes0wn0KUMMY6dOK_sN69z2TiGpDZ2cyzYF07s64bXQ@mail.gmail.com

-Kees


Several recent exploits have used direct calls to the native_write_cr4()
function to disable SMEP and SMAP before continuing their exploits using
userspace memory access. This change pins bits of CR4 so that they cannot
be changed through a common function. This is not intended to be general
ROP protection (which would properly require CFI to defend against), but
rather a way to avoid trivial direct function calls (or CFI bypasses via
a matching function prototype) as seen in:

https://googleprojectzero.blogspot.com/2017/05/exploiting-linux-kernel-via-packet.html
(https://github.com/xairy/kernel-exploits/tree/master/CVE-2017-7308)
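In essence, the attack primitive being blocked boils down to a single
call like the following (an illustration of the technique, not code
taken from the exploit itself):

	/* Attacker redirects a function pointer (e.g. a timer) here: */
	native_write_cr4(native_read_cr4() &
			 ~(X86_CR4_SMEP | X86_CR4_SMAP));

With SMEP and SMAP cleared, the kernel will then happily execute and
dereference attacker-staged userspace memory without faulting.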

The goals of this change:
- pin specific bits (SMEP, SMAP, and UMIP) when writing CR4.
- avoid setting the bits too early (they must become pinned only after
  CPU feature detection and selection has finished).
- the pinning mask needs to be read-only during normal runtime.
- the pinned bits need to be checked after the write to catch jumps
  past the preceding "or".

The mask is marked __ro_after_init so that it cannot first be disabled
with a malicious write.
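For illustration, the mask and its one-time setup might look something
like the following sketch (the names cr_pinning and setup_cr_pinning()
here are illustrative, not necessarily those in the patch):

	/* Sketch only, e.g. in arch/x86/kernel/cpu/common.c. */
	static DEFINE_STATIC_KEY_FALSE_RO(cr_pinning);
	static unsigned long cr4_pinned_bits __ro_after_init;

	/* Run on the boot CPU once feature detection has finished. */
	void setup_cr_pinning(void)
	{
		unsigned long mask;

		mask = (X86_CR4_SMEP | X86_CR4_SMAP | X86_CR4_UMIP);
		cr4_pinned_bits = this_cpu_read(cpu_tlbstate.cr4) & mask;
		static_key_enable(&cr_pinning.key);
	}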

Since these bits are global state (once established by the boot CPU
and kernel boot parameters), they are safe to write to secondary CPUs
before those CPUs have finished feature detection. As such, the bits
are OR'd into the value before the register write, as that is both
simpler and avoids needing storage we don't have: read-only per-CPU
data. (Note that initialization via cr4_init_shadow() isn't early
enough to avoid the earliest native_write_cr4() calls.)

A check is performed after the register write because an attack could
simply jump past the "or" that precedes the write. Such a direct jump
is possible because of how the compiler may build this function
(especially with frame pointers removed): it may add no stack frame at
all, and the function exit may be just a retq without pops, which is
sufficient for trivial exploitation like the timer overwrites mentioned
above.
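Putting that together, the write path could look roughly like this
sketch (assuming the cr_pinning static key and cr4_pinned_bits mask
from above; the "or" only runs when the post-write check fails,
matching the v2 change noted below):

	/* Sketch only, e.g. in arch/x86/include/asm/special_insns.h. */
	static inline void native_write_cr4(unsigned long val)
	{
		unsigned long bits_missing = 0;

	set_register:
		asm volatile("mov %0,%%cr4"
			     : "+r" (val), "+m" (cr4_pinned_bits));

		if (static_branch_likely(&cr_pinning)) {
			if (unlikely((val & cr4_pinned_bits) !=
				     cr4_pinned_bits)) {
				/* Restore the pinned bits and rewrite. */
				bits_missing = ~val & cr4_pinned_bits;
				val |= bits_missing;
				goto set_register;
			}
			/* Warn only after the register is corrected. */
			WARN_ONCE(bits_missing,
				  "CR4 bits went missing: %lx!?\n",
				  bits_missing);
		}
	}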

The asm argument constraints gain the "+" modifier to convince the
compiler that it shouldn't make ordering assumptions about the arguments
or memory, and to treat them as potentially changed by the asm.
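Concretely, the constraint change amounts to something like this
(simplified for illustration):

	/* Input-only: compiler assumes val and memory are untouched. */
	asm volatile("mov %0,%%cr4" : : "r" (val));

	/*
	 * Read-write: compiler must treat val and the mask as modified,
	 * and cannot reorder or cache reads of the mask across the write.
	 */
	asm volatile("mov %0,%%cr4" : "+r" (val), "+m" (cr4_pinned_bits));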

---
v2:
- move setup until after CPU feature detection and selection.
- refactor to use static branches to have atomic enabling.
- only perform the "or" after a failed check.
---

Kees Cook (2):
  x86/asm: Pin sensitive CR4 bits
  x86/asm: Pin sensitive CR0 bits

 arch/x86/include/asm/special_insns.h | 41 ++++++++++++++++++++++++++--
 arch/x86/kernel/cpu/common.c         | 18 ++++++++++++
 2 files changed, 57 insertions(+), 2 deletions(-)

--
2.17.1
