Subject: [PATCH 2/2] x86/asm/entry/32: Remove unnecessary optimization in stub32_clone

Really swap arguments #4 and #5 in stub32_clone instead of "optimizing"
the swap into a move.

Yes, tls_val is currently unused. Yes, on some CPUs XCHG is a little bit
more expensive than MOV. But a cycle or two on an expensive syscall like
clone() is well below the noise floor, and this optimization is simply
not worth the resulting obfuscation of the logic.

Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
CC: Josh Triplett <josh@joshtriplett.org>
CC: Linus Torvalds <torvalds@linux-foundation.org>
CC: Steven Rostedt <rostedt@goodmis.org>
CC: Ingo Molnar <mingo@kernel.org>
CC: Borislav Petkov <bp@alien8.de>
CC: "H. Peter Anvin" <hpa@zytor.com>
CC: Andy Lutomirski <luto@amacapital.net>
CC: Oleg Nesterov <oleg@redhat.com>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: Alexei Starovoitov <ast@plumgrid.com>
CC: Will Drewry <wad@chromium.org>
CC: Kees Cook <keescook@chromium.org>
CC: x86@kernel.org
CC: linux-kernel@vger.kernel.org
---

This is a resend.

There was a patch by Josh Triplett,
"x86: Opt into HAVE_COPY_THREAD_TLS, for both 32-bit and 64-bit",
sent on May 11, which does the same thing as part of a bigger cleanup.
Judging by his comments, he was supportive of this patch;
he will simply have to drop one hunk from his.

arch/x86/ia32/ia32entry.S | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/x86/ia32/ia32entry.S b/arch/x86/ia32/ia32entry.S
index 8e72256..0c302d0 100644
--- a/arch/x86/ia32/ia32entry.S
+++ b/arch/x86/ia32/ia32entry.S
@@ -567,11 +567,9 @@ GLOBAL(stub32_clone)
 	 * 32-bit clone API is clone(..., int tls_val, int *child_tidptr).
 	 * 64-bit clone API is clone(..., int *child_tidptr, int tls_val).
 	 * Native 64-bit kernel's sys_clone() implements the latter.
-	 * We need to swap args here. But since tls_val is in fact ignored
-	 * by sys_clone(), we can get away with an assignment
-	 * (arg4 = arg5) instead of a full swap:
+	 * We need to swap args here:
 	 */
-	mov	%r8, %rcx
+	xchg	%r8, %rcx
 	jmp	ia32_ptregs_common
 
 	ALIGN
--
1.8.1.4

