Subject: [PATCH 4.11 044/197] x86: fix 32-bit case of __get_user_asm_u64()
    4.11-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Linus Torvalds <torvalds@linux-foundation.org>

    commit 33c9e9729033387ef0521324c62e7eba529294af upstream.

    The code to fetch a 64-bit value from user space was entirely buggered,
    and has been since the code was merged in early 2016 in commit
    b2f680380ddf ("x86/mm/32: Add support for 64-bit __get_user() on 32-bit
    kernels").

    Happily the buggered routine is almost certainly entirely unused, since
    the normal way to access user space memory is just with the non-inlined
    "get_user()", and the inlined version didn't even historically exist.

    The normal "get_user()" case is handled by external hand-written asm in
    arch/x86/lib/getuser.S that doesn't have either of these issues.
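
    For reference, a typical call site for that normal path looks roughly
    like the sketch below. This is illustrative only: the helper name is
    made up, but get_user() itself is the standard interface and returns 0
    on success or -EFAULT when the user address faults.

#include <linux/types.h>
#include <linux/uaccess.h>

/* Hypothetical helper, for illustration only. */
static int read_user_u64(const u64 __user *uptr, u64 *out)
{
	u64 val;

	if (get_user(val, uptr))
		return -EFAULT;

	*out = val;
	return 0;
}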

    There were two independent bugs in __get_user_asm_u64():

    - it still did the STAC/CLAC user space access marking, even though
    that is now done by the wrapper macros, see commit 11f1a4b9755f
    ("x86: reorganize SMAP handling in user space accesses").

    This didn't result in a semantic error, it just means that the
    inlined optimized version was hugely less efficient than the
    allegedly slower standard version, since the CLAC/STAC overhead is
    quite high on modern Intel CPU's.

    - the double register %eax/%edx was marked as an output, but the %eax
    part of it was touched early in the asm, and could thus clobber other
    inputs to the asm that gcc didn't expect it to touch.

    In particular, that meant that the generated code could look like
    this:

    mov (%eax),%eax
    mov 0x4(%eax),%edx

    where the load of %edx obviously was _supposed_ to be from the 32-bit
    word that followed the source of %eax, but because %eax was
    overwritten by the first instruction, the source of %edx was
    basically random garbage.
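
    To make the constraint problem concrete outside the kernel, here is a
    small, self-contained user-space sketch of the same pattern
    (illustrative only, not kernel code; build 32-bit, e.g. "gcc -m32 -O2").
    The first variant uses the plain "=A" double-register output from the
    buggy code; the second uses the early-clobber "=&A" form that the fix
    below switches to. Whether the first variant actually miscompiles
    depends on how the compiler happens to allocate registers, so treat it
    as a demonstration of the hazard, not a guaranteed reproducer.

#include <stdint.h>
#include <stdio.h>

/* Buggy pattern: "=A" tells the compiler the %eax/%edx pair is written
 * only after all inputs have been consumed, so it may address the "m"
 * inputs through %eax, which the first movl then clobbers. */
static uint64_t load64_no_earlyclobber(const uint32_t *p)
{
	uint64_t val;

	asm("movl %1,%%eax\n\t"
	    "movl %2,%%edx"
	    : "=A" (val)
	    : "m" (p[0]), "m" (p[1]));
	return val;
}

/* Fixed pattern: "=&A" (early-clobber) forbids the compiler from placing
 * any input in %eax or %edx, because the output is written early. */
static uint64_t load64_earlyclobber(const uint32_t *p)
{
	uint64_t val;

	asm("movl %1,%%eax\n\t"
	    "movl %2,%%edx"
	    : "=&A" (val)
	    : "m" (p[0]), "m" (p[1]));
	return val;
}

int main(void)
{
	uint32_t words[2] = { 0x11223344u, 0x55667788u };

	printf("plain \"=A\":          %#llx\n",
	       (unsigned long long)load64_no_earlyclobber(words));
	printf("early-clobber \"=&A\": %#llx\n",
	       (unsigned long long)load64_earlyclobber(words));
	return 0;
}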

    The fixes are trivial: remove the extraneous STAC/CLAC entries, and mark
    the 64-bit output as early-clobber to let gcc know that no inputs should
    alias with the output register.
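
    On the STAC/CLAC half of the fix: since commit 11f1a4b9755f the SMAP
    bracketing is done once by the wrapper that invokes these per-size asm
    helpers, so repeating it inside the helper only emits the expensive
    STAC/CLAC pair a second time. Roughly, and only as a paraphrase from
    memory rather than a copy of arch/x86/include/asm/uaccess.h (the exact
    macro body and types differ), the calling pattern looks like this,
    assuming the __uaccess_begin()/__uaccess_end() helpers that commit
    introduced:

/* Paraphrased sketch of the wrapper pattern, not the kernel's literal
 * macro: the STAC/CLAC pair is emitted once, around the whole access,
 * by __uaccess_begin()/__uaccess_end(). */
#define __get_user_nocheck(x, ptr, size)				\
({									\
	int __gu_err;							\
	unsigned long long __gu_val;					\
	__uaccess_begin();		/* STAC happens here, once */	\
	__get_user_size(__gu_val, (ptr), (size), __gu_err, -EFAULT);	\
	__uaccess_end();		/* ...and CLAC here */		\
	(x) = (__force __typeof__(*(ptr)))__gu_val;			\
	__builtin_expect(__gu_err, 0);					\
})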

    Cc: Al Viro <viro@zeniv.linux.org.uk>
    Cc: Benjamin LaHaise <bcrl@kvack.org>
    Cc: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/include/asm/uaccess.h | 6 +++---
    1 file changed, 3 insertions(+), 3 deletions(-)

--- a/arch/x86/include/asm/uaccess.h
+++ b/arch/x86/include/asm/uaccess.h
@@ -324,10 +324,10 @@ do { \
 #define __get_user_asm_u64(x, ptr, retval, errret)			\
 ({									\
 	__typeof__(ptr) __ptr = (ptr);					\
-	asm volatile(ASM_STAC "\n"					\
+	asm volatile("\n"						\
 	"1:	movl %2,%%eax\n"					\
 	"2:	movl %3,%%edx\n"					\
-	"3: " ASM_CLAC "\n"						\
+	"3:\n"								\
 	".section .fixup,\"ax\"\n"					\
 	"4:	mov %4,%0\n"						\
 	"	xorl %%eax,%%eax\n"					\
@@ -336,7 +336,7 @@ do { \
 	".previous\n"							\
 	_ASM_EXTABLE(1b, 4b)						\
 	_ASM_EXTABLE(2b, 4b)						\
-	: "=r" (retval), "=A"(x)					\
+	: "=r" (retval), "=&A"(x)					\
 	: "m" (__m(__ptr)), "m" __m(((u32 *)(__ptr)) + 1),		\
 	  "i" (errret), "0" (retval));					\
 })
