Subject: Re: [PATCH v2 bpf] bpf: introduce BPF_JIT_ALWAYS_ON config
On 01/09/2018 05:52 AM, Alexei Starovoitov wrote:
> The BPF interpreter has been used as part of the Spectre variant 2 attack (CVE-2017-5715).
>
> A quote from the Google Project Zero blog:
> "At this point, it would normally be necessary to locate gadgets in
> the host kernel code that can be used to actually leak data by reading
> from an attacker-controlled location, shifting and masking the result
> appropriately and then using the result of that as offset to an
> attacker-controlled address for a load. But piecing gadgets together
> and figuring out which ones work in a speculation context seems annoying.
> So instead, we decided to use the eBPF interpreter, which is built into
> the host kernel - while there is no legitimate way to invoke it from inside
> a VM, the presence of the code in the host kernel's text section is sufficient
> to make it usable for the attack, just like with ordinary ROP gadgets."
>
> To make the attacker's job harder, introduce a BPF_JIT_ALWAYS_ON config
> option that removes the interpreter from the kernel in favor of JIT-only mode.
> So far the eBPF JIT is supported by:
> x64, arm64, arm32, sparc64, s390, powerpc64, mips64
>
> The start of the JITed program is randomized and its code page is marked
> read-only. In addition, "constant blinding" can be turned on via the
> net.core.bpf_jit_harden sysctl.
>
> v1->v2:
> - fix init order, test_bpf and cBPF (Daniel's feedback)
> - fix offloaded bpf (Jakub's feedback)
> - add 'return 0' dummy in case something can invoke prog->bpf_func
> - retarget to the bpf tree. For bpf-next the patch would need one extra hunk;
>   it will be sent when the trees are merged back into net-next.
>
> Considered doing:
> int bpf_jit_enable __read_mostly = BPF_EBPF_JIT_DEFAULT;
> but it seems better to land the patch as-is and, in bpf-next, remove the
> bpf_jit_enable global variable from all JITs, consolidate it in one place,
> and remove this jit_init() function.

Ok, makes sense.
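
Agree with doing the consolidation in bpf-next; for the record, it could
then boil down to roughly the following (just a sketch, the
BPF_JIT_DEFAULT_ENABLE name is made up here and would correspond to your
BPF_EBPF_JIT_DEFAULT):

  /* Single definition of bpf_jit_enable whose default follows the
   * Kconfig choice, instead of each arch JIT defining its own copy.
   */
  #ifdef CONFIG_BPF_JIT_ALWAYS_ON
  # define BPF_JIT_DEFAULT_ENABLE 1
  #else
  # define BPF_JIT_DEFAULT_ENABLE 0
  #endif

  int bpf_jit_enable __read_mostly = BPF_JIT_DEFAULT_ENABLE;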

[...]

Still one minor thing left:

> @@ -1354,6 +1357,12 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
>  	return 0;
>  }
>
> +static unsigned int __bpf_prog_ret0(const void *ctx,
> +				    const struct bpf_insn *insn)
> +{
> +	return 0;
> +}

When CONFIG_BPF_JIT_ALWAYS_ON is disabled, this emits the following
warning:

[...]
  CC      kernel/bpf/core.o
kernel/bpf/core.c:1360:21: warning: ‘__bpf_prog_ret0’ defined but not used [-Wunused-function]
 static unsigned int __bpf_prog_ret0(const void *ctx,
                     ^~~~~~~~~~~~~~~

Probably best to just wrap it in an #ifdef CONFIG_BPF_JIT_ALWAYS_ON.
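I.e. something like this (untested sketch):

  /* Only build the dummy when the interpreter is compiled out, so
   * !CONFIG_BPF_JIT_ALWAYS_ON builds don't carry an unused function.
   */
  #ifdef CONFIG_BPF_JIT_ALWAYS_ON
  static unsigned int __bpf_prog_ret0(const void *ctx,
  				      const struct bpf_insn *insn)
  {
  	return 0;
  }
  #endif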

>  /**
>   * bpf_prog_select_runtime - select exec runtime for BPF program
>   * @fp: bpf_prog populated with internal BPF program
> @@ -1364,9 +1373,13 @@ static int bpf_check_tail_call(const struct bpf_prog *fp)
>   */
>  struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
>  {
> +#ifndef CONFIG_BPF_JIT_ALWAYS_ON
>  	u32 stack_depth = max_t(u32, fp->aux->stack_depth, 1);
>
>  	fp->bpf_func = interpreters[(round_up(stack_depth, 32) / 32) - 1];
> +#else
> +	fp->bpf_func = __bpf_prog_ret0;
> +#endif
>
>  	/* eBPF JITs can rewrite the program in case constant
>  	 * blinding is active. However, in case of error during
> @@ -1376,6 +1389,12 @@ struct bpf_prog *bpf_prog_select_runtime(struct bpf_prog *fp, int *err)
>  	 */
>  	if (!bpf_prog_is_dev_bound(fp->aux)) {
>  		fp = bpf_int_jit_compile(fp);
> +#ifdef CONFIG_BPF_JIT_ALWAYS_ON
> +		if (!fp->jited) {
> +			*err = -ENOTSUPP;
> +			return fp;
> +		}
> +#endif
>  	} else {
>  		*err = bpf_prog_offload_compile(fp);
>  		if (*err)