Subject: Re: [PATCH v4 3/5] stack: Optionally randomize kernel stack offset each syscall
On Mon, Jun 22, 2020 at 12:31:44PM -0700, Kees Cook wrote:
> +
> +#define add_random_kstack_offset() do {				\
> +	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT, \
> +				&randomize_kstack_offset)) {		\
> +		u32 offset = this_cpu_read(kstack_offset);		\
> +		u8 *ptr = __builtin_alloca(offset & 0x3FF);		\
> +		asm volatile("" : "=m"(*ptr));				\
> +	}								\
> +} while (0)

This feels a little fragile. ptr doesn't escape the block, so the
compiler is free to restore the stack immediately after this block. In
fact, given that all you've told the compiler is that the asm writes
*ptr, and nothing ever uses that output, the compiler could eliminate
the whole thing, no?

https://godbolt.org/z/HT43F5

gcc restores the stack immediately if no function calls come after it.

clang eliminates the code entirely if no function calls come after it.

I'm not sure why function calls should affect it, though, given that
those functions can't possibly access ptr or the memory it points to.
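
For reference, here's roughly the pattern the godbolt link is showing
(a cut-down userspace version, not the exact input; dummy() and the
function names are just for illustration). Compile with gcc/clang -O2
and compare the generated asm:

#include <stdint.h>

void dummy(void);

void with_call(uint32_t offset)
{
	uint8_t *ptr = __builtin_alloca(offset & 0x3FF);
	asm volatile("" : "=m"(*ptr));	/* output is never consumed */
	dummy();			/* stack stays adjusted until here */
}

void without_call(uint32_t offset)
{
	uint8_t *ptr = __builtin_alloca(offset & 0x3FF);
	asm volatile("" : "=m"(*ptr));	/* gcc restores the stack right
					   away, clang drops the alloca
					   entirely */
}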

A full memory barrier (like barrier_data()) should work better -- it
gives the compiler a reason to believe that ptr might escape and be
accessed by any code following the block?
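
Something like the below, i.e. reusing barrier_data() from
include/linux/compiler.h (which is just
__asm__ __volatile__("": :"r"(ptr) :"memory")). Untested sketch:

#define add_random_kstack_offset() do {					\
	if (static_branch_maybe(CONFIG_RANDOMIZE_KSTACK_OFFSET_DEFAULT,	\
				&randomize_kstack_offset)) {		\
		u32 offset = this_cpu_read(kstack_offset);		\
		u8 *ptr = __builtin_alloca(offset & 0x3FF);		\
		/* ptr may escape via the asm; keep the alloca alive */	\
		barrier_data(ptr);					\
	}								\
} while (0)

With the "memory" clobber the compiler has to assume the alloca'd
region can still be reached by whatever follows, so it can't undo the
stack adjustment early or elide it.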
