Subject: Re: [PATCH v2 5/5] arm64: entry: Enable random_kstack_offset support
On Tue, Mar 24, 2020 at 01:32:31PM -0700, Kees Cook wrote:
> Allow for a randomized stack offset on a per-syscall basis, with roughly
> 5 bits of entropy.
>
> Signed-off-by: Kees Cook <keescook@chromium.org>

Just to check, do you have an idea of the impact on arm64? Patch 3 had
figures for x86 where it reads the TSC, and it's unclear to me how
get_random_int() compares to that.

Otherwise, this looks sound to me; I'd just like to know whether the
overhead is in the same ballpark.
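
For what it's worth, here is a rough sketch (my own assumption, not
something from this thread or from patch 3) of how the per-call cost of
get_random_int() could be eyeballed on arm64, e.g. from a throwaway
module timed with ktime:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/random.h>
#include <linux/timekeeping.h>

static int __init grnd_cost_init(void)
{
        volatile u32 sink;
        u64 t0, t1;
        int i;

        /* Time a batch of calls; ktime_get_ns() keeps this arch-neutral. */
        t0 = ktime_get_ns();
        for (i = 0; i < 100000; i++)
                sink = get_random_int();
        t1 = ktime_get_ns();

        pr_info("get_random_int(): ~%llu ns/call over %d calls\n",
                (t1 - t0) / 100000, 100000);
        return 0;
}

static void __exit grnd_cost_exit(void)
{
}

module_init(grnd_cost_init);
module_exit(grnd_cost_exit);
MODULE_LICENSE("GPL");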

Thanks
Mark.

> ---
> arch/arm64/Kconfig | 1 +
> arch/arm64/kernel/syscall.c | 10 ++++++++++
> 2 files changed, 11 insertions(+)
>
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index 0b30e884e088..4d5aa4959f72 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -127,6 +127,7 @@ config ARM64
> select HAVE_ARCH_MMAP_RND_BITS
> select HAVE_ARCH_MMAP_RND_COMPAT_BITS if COMPAT
> select HAVE_ARCH_PREL32_RELOCATIONS
> + select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
> select HAVE_ARCH_SECCOMP_FILTER
> select HAVE_ARCH_STACKLEAK
> select HAVE_ARCH_THREAD_STRUCT_WHITELIST
> diff --git a/arch/arm64/kernel/syscall.c b/arch/arm64/kernel/syscall.c
> index a12c0c88d345..238dbd753b44 100644
> --- a/arch/arm64/kernel/syscall.c
> +++ b/arch/arm64/kernel/syscall.c
> @@ -5,6 +5,7 @@
> #include <linux/errno.h>
> #include <linux/nospec.h>
> #include <linux/ptrace.h>
> +#include <linux/randomize_kstack.h>
> #include <linux/syscalls.h>
>
> #include <asm/daifflags.h>
> @@ -42,6 +43,8 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
> {
> long ret;
>
> + add_random_kstack_offset();
> +
> if (scno < sc_nr) {
> syscall_fn_t syscall_fn;
> syscall_fn = syscall_table[array_index_nospec(scno, sc_nr)];
> @@ -51,6 +54,13 @@ static void invoke_syscall(struct pt_regs *regs, unsigned int scno,
> }
>
> regs->regs[0] = ret;
> +
> + /*
> + * Since the compiler chooses a 4 bit alignment for the stack,
> + * let's save one additional bit (9 total), which gets us up
> + * near 5 bits of entropy.
> + */
> + choose_random_kstack_offset(get_random_int() & 0x1FF);
> }
>
> static inline bool has_syscall_work(unsigned long flags)
> --
> 2.20.1
>
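
As an aside, a quick back-of-the-envelope check of the "near 5 bits"
comment quoted above (plain userspace C written here purely for
illustration, not part of the patch): model the 16-byte stack alignment
by dropping the low 4 bits of the 9-bit value and count the distinct
offsets that remain.

#include <stdio.h>

int main(void)
{
        int seen[512] = { 0 };
        int distinct = 0;
        unsigned int r;

        for (r = 0; r < 512; r++) {
                /* 9 bits of randomness, rounded down to 16-byte alignment */
                unsigned int off = (r & 0x1FF) & ~0xFU;

                if (!seen[off]++)
                        distinct++;
        }

        /* Prints 32, i.e. 2^5 distinct offsets -> ~5 bits of entropy */
        printf("%d distinct stack offsets\n", distinct);
        return 0;
}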
