    Subject: [tip:x86/asm] x86/asm/entry/32: Remove unnecessary optimization in stub32_clone
    Commit-ID:  7a5a9824c18f93415944c997dc6bb8eecfddd2e7
    Gitweb: http://git.kernel.org/tip/7a5a9824c18f93415944c997dc6bb8eecfddd2e7
    Author: Denys Vlasenko <dvlasenk@redhat.com>
    AuthorDate: Wed, 3 Jun 2015 15:58:50 +0200
    Committer: Ingo Molnar <mingo@kernel.org>
    CommitDate: Fri, 5 Jun 2015 13:41:28 +0200

    x86/asm/entry/32: Remove unnecessary optimization in stub32_clone

    Really swap arguments #4 and #5 in stub32_clone instead of
    "optimizing" it into a move.

    Yes, tls_val is currently unused. Yes, on some CPUs XCHG is a
    little more expensive than MOV. But a cycle or two on an
    expensive syscall like clone() is far below the noise floor,
    and this optimization is simply not worth the resulting
    obfuscation of the logic.
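
    [ Editorial note: in C terms, the difference is an assignment versus
    a true swap. A minimal sketch (arg4/arg5 stand for sys_clone()'s
    fourth and fifth arguments, which arrive in %rcx and %r8 under the
    x86-64 C calling convention; the variable names are illustrative):

    	/* Old 'mov %r8, %rcx' was equivalent to: */
    	arg4 = arg5;		/* tls_val is silently overwritten */

    	/* New 'xchg %r8, %rcx' is equivalent to: */
    	long tmp = arg4;
    	arg4 = arg5;
    	arg5 = tmp;		/* tls_val survives in arg5 */
    ]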

    [ There's also ongoing work on the clone() ABI by Josh Triplett
    that will depend on this change later on. ]

    Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
    Cc: Alexei Starovoitov <ast@plumgrid.com>
    Cc: Andrew Morton <akpm@linux-foundation.org>
    Cc: Andy Lutomirski <luto@amacapital.net>
    Cc: Borislav Petkov <bp@alien8.de>
    Cc: Brian Gerst <brgerst@gmail.com>
    Cc: Frederic Weisbecker <fweisbec@gmail.com>
    Cc: H. Peter Anvin <hpa@zytor.com>
    Cc: Josh Triplett <josh@joshtriplett.org>
    Cc: Kees Cook <keescook@chromium.org>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Oleg Nesterov <oleg@redhat.com>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Steven Rostedt <rostedt@goodmis.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: Will Drewry <wad@chromium.org>
    Link: http://lkml.kernel.org/r/1433339930-20880-2-git-send-email-dvlasenk@redhat.com
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    ---
    arch/x86/entry/ia32entry.S | 13 ++++++-------
    1 file changed, 6 insertions(+), 7 deletions(-)

    diff --git a/arch/x86/entry/ia32entry.S b/arch/x86/entry/ia32entry.S
    index d0c7b28..9558dac 100644
    --- a/arch/x86/entry/ia32entry.S
    +++ b/arch/x86/entry/ia32entry.S
    @@ -529,14 +529,13 @@ GLOBAL(\label)
     GLOBAL(stub32_clone)
     	leaq	sys_clone(%rip), %rax
     	/*
    -	 * 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
    -	 * 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
    -	 * Native 64-bit kernel's sys_clone() implements the latter.
    -	 * We need to swap args here. But since tls_val is in fact ignored
    -	 * by sys_clone(), we can get away with an assignment
    -	 * (arg4 = arg5) instead of a full swap:
    +	 * The 32-bit clone ABI is: clone(..., int tls_val, int *child_tidptr).
    +	 * The 64-bit clone ABI is: clone(..., int *child_tidptr, int tls_val).
    +	 *
    +	 * The native 64-bit kernel's sys_clone() implements the latter,
    +	 * so we need to swap arguments here before calling it:
     	 */
    -	mov	%r8, %rcx
    +	xchg	%r8, %rcx
     	jmp	ia32_ptregs_common
     
     	ALIGN
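
    [ Editorial note: for reference, here are the two prototypes being
    reconciled, spelled out as a C sketch (the function and parameter
    names are illustrative, not the kernel's own; only the order of the
    last two arguments differs, which is exactly what the single XCHG
    fixes up):

    	/* 32-bit ABI: tls_val is arg4, child_tidptr is arg5. */
    	long clone_ia32(unsigned long flags, unsigned long newsp,
    			int *parent_tidptr, int tls_val,
    			int *child_tidptr);

    	/* 64-bit ABI: child_tidptr is arg4, tls_val is arg5. */
    	long clone_native64(unsigned long flags, unsigned long newsp,
    			int *parent_tidptr, int *child_tidptr,
    			int tls_val);
    ]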
