Date: Thu, 26 Jan 2012 15:55:32 +0000
From: "Jan Beulich" <>
Subject: [PATCH] x86-64: handle byte-wise tail copying in memcpy() without a loop
While the gain is hard to measure, reducing the number of possibly/likely mis-predicted branches can generally be expected to be slightly beneficial.

Contrary to what one might expect at first glance, this also doesn't grow the function size (the alignment gap to the next function just gets smaller).
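For illustration, the new tail handling is roughly equivalent to the C sketch below (the helper and its name are hypothetical, not part of this patch): the first byte is always stored; for a count of 2 or 3, the byte at offset 1 and the byte at offset count-1 are loaded and stored as well, with those two stores simply aliasing when the count is 2, so no per-byte loop is needed.

/*
 * Illustrative C sketch of the new 1..3 byte tail copy: at most one
 * conditional branch instead of a byte-wise loop. Hypothetical helper,
 * not the kernel implementation itself.
 */
#include <stddef.h>

static void copy_tail(unsigned char *dst, const unsigned char *src, size_t n)
{
	if (n == 0)				/* subl $1,%edx; jb .Lend */
		return;
	unsigned char first = src[0];		/* movzbl (%rsi),%ecx */
	if (n > 1) {				/* jz .Lstore_1byte (taken if n == 1) */
		unsigned char second = src[1];	/* movzbq 1(%rsi),%r8 */
		unsigned char last = src[n - 1]; /* movzbq (%rsi,%rdx),%r9 */
		dst[1] = second;		/* movb %r8b,1(%rdi) */
		dst[n - 1] = last;		/* movb %r9b,(%rdi,%rdx) */
	}
	dst[0] = first;				/* .Lstore_1byte: movb %cl,(%rdi) */
}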
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
 arch/x86/lib/memcpy_64.S | 19 ++++++++++---------
 1 file changed, 10 insertions(+), 9 deletions(-)
--- 3.3-rc1/arch/x86/lib/memcpy_64.S
+++ 3.3-rc1-x86_64-memcpy-tail/arch/x86/lib/memcpy_64.S
@@ -169,18 +169,19 @@ ENTRY(memcpy)
 	retq
 	.p2align 4
 .Lless_3bytes:
-	cmpl $0, %edx
-	je .Lend
+	subl $1, %edx
+	jb .Lend
 	/*
 	 * Move data from 1 bytes to 3 bytes.
 	 */
-.Lloop_1:
-	movb (%rsi), %r8b
-	movb %r8b, (%rdi)
-	incq %rdi
-	incq %rsi
-	decl %edx
-	jnz .Lloop_1
+	movzbl (%rsi), %ecx
+	jz .Lstore_1byte
+	movzbq 1(%rsi), %r8
+	movzbq (%rsi, %rdx), %r9
+	movb %r8b, 1(%rdi)
+	movb %r9b, (%rdi, %rdx)
+.Lstore_1byte:
+	movb %cl, (%rdi)
 
 .Lend:
 	retq
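Note that subl $1, %edx serves double duty: it adjusts %edx to count-1 (so that (%rsi,%rdx) addresses the last byte) and sets the flags, with CF (jb) catching a zero count via unsigned underflow and ZF (jz) catching a count of exactly one; the intervening movzbl does not modify the flags, so jz still tests the result of the subl. A minimal harness exercising the C sketch above over all tail lengths (again hypothetical, assuming it is compiled together with that sketch):

#include <assert.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
	static const unsigned char src[3] = { 0x11, 0x22, 0x33 };

	for (size_t n = 0; n <= 3; n++) {
		unsigned char dst[3] = { 0 };

		copy_tail(dst, src, n);		/* copy_tail() from the sketch above */
		assert(memcmp(dst, src, n) == 0); /* first n bytes must match */
	}
	puts("tail copy matches for n = 0..3");
	return 0;
}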