    From: Marco Elver <elver@google.com>
    Date: Thu, 18 Jun 2020
    Subject: Re: [PATCH 3/7] x86/entry: Fixup bad_iret vs noinstr
    On Thu, 18 Jun 2020 at 16:50, Peter Zijlstra <peterz@infradead.org> wrote:
    >
    > vmlinux.o: warning: objtool: fixup_bad_iret()+0x8e: call to memcpy() leaves .noinstr.text section
    >
    > Worse, with KASAN there is no telling what memcpy() actually is. Force
    > the use of __memcpy(), which is our assembly implementation.
    >
    > Reported-by: Marco Elver <elver@google.com>
    > Suggested-by: Marco Elver <elver@google.com>
    > Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

    KASAN no longer crashes, although the stack size increase appears to
    be sufficient for the particular case I ran into.

    Tested-by: Marco Elver <elver@google.com>

    Thanks!
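
    For context on why "there is no telling what memcpy() actually is": in a
    CONFIG_KASAN build the generic memcpy() symbol is an instrumented wrapper
    that validates both ranges against the shadow memory before doing the
    copy. A simplified sketch of that interposition (not the exact mm/kasan
    source, which uses its internal range-check helpers and _RET_IP_):

        #include <linux/kasan-checks.h>
        #include <linux/string.h>

        /*
         * Sketch: under KASAN the exported memcpy() checks the shadow memory
         * for both buffers and only then performs the copy via the
         * uninstrumented assembly implementation, __memcpy().
         */
        #undef memcpy
        void *memcpy(void *dest, const void *src, size_t len)
        {
                /* Instrumentation: may report/fault, unsafe from noinstr code. */
                kasan_check_write(dest, len);
                kasan_check_read(src, len);

                /* The raw copy the patch now calls directly. */
                return __memcpy(dest, src, len);
        }

    Calling __memcpy() explicitly therefore sidesteps whichever wrapper
    memcpy() happens to resolve to in an instrumented build.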

    > ---
    > arch/x86/kernel/traps.c | 6 +++---
    > arch/x86/lib/memcpy_64.S | 4 ++++
    > 2 files changed, 7 insertions(+), 3 deletions(-)
    >
    > --- a/arch/x86/kernel/traps.c
    > +++ b/arch/x86/kernel/traps.c
    > @@ -685,13 +685,13 @@ struct bad_iret_stack *fixup_bad_iret(st
    > 		(struct bad_iret_stack *)__this_cpu_read(cpu_tss_rw.x86_tss.sp0) - 1;
    >
    > 	/* Copy the IRET target to the temporary storage. */
    > -	memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
    > +	__memcpy(&tmp.regs.ip, (void *)s->regs.sp, 5*8);
    >
    > 	/* Copy the remainder of the stack from the current stack. */
    > -	memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
    > +	__memcpy(&tmp, s, offsetof(struct bad_iret_stack, regs.ip));
    >
    > 	/* Update the entry stack */
    > -	memcpy(new_stack, &tmp, sizeof(tmp));
    > +	__memcpy(new_stack, &tmp, sizeof(tmp));
    >
    > 	BUG_ON(!user_mode(&new_stack->regs));
    > 	return new_stack;
    > --- a/arch/x86/lib/memcpy_64.S
    > +++ b/arch/x86/lib/memcpy_64.S
    > @@ -8,6 +8,8 @@
    > #include <asm/alternative-asm.h>
    > #include <asm/export.h>
    >
    > +.pushsection .noinstr.text, "ax"
    > +
    > /*
    > * We build a jump to memcpy_orig by default which gets NOPped out on
    > * the majority of x86 CPUs which set REP_GOOD. In addition, CPUs which
    > @@ -184,6 +186,8 @@ SYM_FUNC_START_LOCAL(memcpy_orig)
    > retq
    > SYM_FUNC_END(memcpy_orig)
    >
    > +.popsection
    > +
    > #ifndef CONFIG_UML
    >
    > MCSAFE_TEST_CTL
    >
    >
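
    One note on the memcpy_64.S hunk: the .pushsection/.popsection pair moves
    the whole assembly implementation (memcpy, __memcpy and memcpy_orig) into
    .noinstr.text, so the call from fixup_bad_iret() no longer leaves the
    section as far as objtool is concerned. The C-side counterpart is the
    noinstr annotation; roughly what that amounts to (a simplified sketch with
    illustrative names, not the real macro from
    include/linux/compiler_types.h, which adds further attributes):

        /*
         * Sketch: place a function in .noinstr.text and keep the compiler
         * from instrumenting it.  Anything it calls must live in
         * .noinstr.text as well, which is why __memcpy() itself has to be
         * moved there -- otherwise objtool warns as in the changelog above.
         */
        #define noinstr_sketch                                          \
                __attribute__((__section__(".noinstr.text")))           \
                __attribute__((__no_instrument_function__))

        noinstr_sketch void example_entry_helper(void)
        {
                /* hypothetical helper; only noinstr-safe calls allowed here */
        }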
