From: Josh Poimboeuf <jpoimboe@redhat.com>
Subject: [PATCH 10/22] bpf: Disable GCC -fgcse optimization for ___bpf_prog_run()
Date: 14 Jul 2019
    On x86-64, with CONFIG_RETPOLINE=n, GCC's "global common subexpression
    elimination" optimization results in ___bpf_prog_run()'s jumptable code
    changing from this:

    select_insn:
            jmp *jumptable(, %rax, 8)
            ...
    ALU64_ADD_X:
            ...
            jmp *jumptable(, %rax, 8)
    ALU_ADD_X:
            ...
            jmp *jumptable(, %rax, 8)

    to this:

    select_insn:
            mov jumptable, %r12
            jmp *(%r12, %rax, 8)
            ...
    ALU64_ADD_X:
            ...
            jmp *(%r12, %rax, 8)
    ALU_ADD_X:
            ...
            jmp *(%r12, %rax, 8)
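
    For context, the jumptable above comes from the interpreter's
    computed-goto dispatch, a GCC extension. Below is a minimal
    user-space sketch of that dispatch style (hypothetical opcodes,
    not the actual BPF interpreter) which produces the same
    "jmp *jumptable(, %rax, 8)" pattern when built with -O2:

        #include <stdint.h>

        /* Sketch only: tiny computed-goto interpreter mirroring the
         * dispatch style of ___bpf_prog_run(). */
        static uint64_t run(const uint8_t *insn, uint64_t a, uint64_t b)
        {
                static const void * const jumptable[256] = {
                        [0 ... 255] = &&out,    /* GCC range initializer */
                        [1] = &&OP_ADD,
                        [2] = &&OP_RET,
                };

                goto *jumptable[*insn++];       /* select_insn */
        OP_ADD:
                a += b;
                goto *jumptable[*insn++];       /* per-opcode dispatch */
        OP_RET:
                return a;
        out:
                return 0;
        }

    With gcse enabled, GCC may hoist the jumptable load into a
    register as shown above; with -fno-gcse, each goto should compile
    back to the memory-indexed form.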

    The jumptable address is placed in a register once, at the beginning of
    the function. The function execution can then go through multiple
    indirect jumps which rely on that same register value. This has a few
    issues:

    1) Objtool isn't smart enough to be able to track such a register value
    across multiple recursive indirect jumps through the jump table.

    2) With CONFIG_RETPOLINE=n, this optimization actually results in
    a small slowdown. I measured a ~4.7% slowdown in the test_bpf
    "tcpdump port 22" selftest.

    This slowdown is actually predicted by the GCC manual:

    Note: When compiling a program using computed gotos, a GCC
    extension, you may get better run-time performance if you
    disable the global common subexpression elimination pass by
    adding -fno-gcse to the command line.

    So just disable the optimization for this function.
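
    The patch below does this with GCC's per-function optimize
    attribute, which scopes the manual's advice to a single function
    instead of building all of kernel/bpf/core.c with -fno-gcse. A
    standalone sketch of the mechanism (the macro name matches the
    patch; run_no_gcse is a placeholder, not a kernel symbol):

        #include <stdint.h>

        /* Sketch: per-function equivalent of adding -fno-gcse to the
         * command line, via GCC's optimize attribute. */
        #define __no_fgcse __attribute__((optimize("-fno-gcse")))

        static uint64_t __no_fgcse run_no_gcse(const uint8_t *insn,
                                               uint64_t a, uint64_t b)
        {
                /* computed-goto dispatch would go here, as in the
                 * sketch above */
                (void)insn;
                (void)b;
                return a;
        }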

    Fixes: e55a73251da3 ("bpf: Fix ORC unwinding in non-JIT BPF code")
    Reported-by: Randy Dunlap <rdunlap@infradead.org>
    Signed-off-by: Josh Poimboeuf <jpoimboe@redhat.com>
    Acked-by: Alexei Starovoitov <ast@kernel.org>
    ---
    Cc: Alexei Starovoitov <ast@kernel.org>
    Cc: Daniel Borkmann <daniel@iogearbox.net>
    ---
     include/linux/compiler-gcc.h   | 2 ++
     include/linux/compiler_types.h | 4 ++++
     kernel/bpf/core.c              | 2 +-
     3 files changed, 7 insertions(+), 1 deletion(-)

    diff --git a/include/linux/compiler-gcc.h b/include/linux/compiler-gcc.h
    index e8579412ad21..d7ee4c6bad48 100644
    --- a/include/linux/compiler-gcc.h
    +++ b/include/linux/compiler-gcc.h
    @@ -170,3 +170,5 @@
     #else
     #define __diag_GCC_8(s)
     #endif
    +
    +#define __no_fgcse __attribute__((optimize("-fno-gcse")))
    diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
    index 095d55c3834d..599c27b56c29 100644
    --- a/include/linux/compiler_types.h
    +++ b/include/linux/compiler_types.h
    @@ -189,6 +189,10 @@ struct ftrace_likely_data {
     #define asm_volatile_goto(x...) asm goto(x)
     #endif

    +#ifndef __no_fgcse
    +# define __no_fgcse
    +#endif
    +
     /* Are two types/vars the same type (ignoring qualifiers)? */
     #define __same_type(a, b) __builtin_types_compatible_p(typeof(a), typeof(b))

    diff --git a/kernel/bpf/core.c b/kernel/bpf/core.c
    index 7e98f36a14e2..8191a7db2777 100644
    --- a/kernel/bpf/core.c
    +++ b/kernel/bpf/core.c
    @@ -1295,7 +1295,7 @@ bool bpf_opcode_in_insntable(u8 code)
      *
      * Decode and execute eBPF instructions.
      */
    -static u64 ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
    +static u64 __no_fgcse ___bpf_prog_run(u64 *regs, const struct bpf_insn *insn, u64 *stack)
     {
     #define BPF_INSN_2_LBL(x, y) [BPF_##x | BPF_##y] = &&x##_##y
     #define BPF_INSN_3_LBL(x, y, z) [BPF_##x | BPF_##y | BPF_##z] = &&x##_##y##_##z
    --
    2.20.1