 
Subject: Re: [patch] string.h speedup, cld-2.3.30-A1
Ingo Molnar wrote:
>
> x86 GCC and string.h generates lots of 'cld' instructions for string /
> memory copies. My 2.3.30-pre2 kernel contains 973 cld instructions, often
> in hot paths. Most of these cld's are unnecessary, and just add a 6 cycle
> overhead and pollute the icache. This is a problem which has been long
> known, but not addressed due to various subtle dangers related to the
> change.

note also that a lot of memory copies were already _not_ shielded with
clds, e.g. all of copy_user() and friends.

> but all Linux entry and 'untrusted exit' points already do a 'cld', so the
> change would be safe, conceptually.


> future exit/entry points will have to do a 'cld' when returning from
> untrusted domains, just like they have to restore segment registers.

this should be explicitly documented somewhere in Documentation/
(it also applies to third-party kernel code, i.e. modules).

> many of the remaining 284 cld's are still unjustified, because '*a = *b;'
> type of structure copies are inlined by GCC. (GCC generates a cld because
> user-space has to be prepared to be interrupted by uncooperative signal
> handlers and the like). So i added a cpy(x,y) macro which copies one
> structure into another via memcpy. We may not want to do it this way
> though, if it's too ugly. I have not found any way to prevent GCC from
> using its own memcpy function in the structure copy case.

first, namespace pollution: something like copy_struct() would be safer
and less conflict-prone (see the sketch below these points).
second, gcc seems to generate "better" code for the builtins than for
the inlined asms, probably due to better register allocation. (there's
room for improvement in string.h's routines; i played with that, but after
seeing incorrect code being generated (afaicr the trigger was having two
identical inputs with only one of them marked as clobbered) i gave up and
used the builtin.) note that "better" doesn't necessarily mean faster -- i
didn't benchmark them.
third, gcc unconditionally generating clds everywhere is a compiler
issue, which should be fixed in compiler land. something like
-mno-implicit-clds. (working around the problem by disabling gcc
optimizations isn't an optimal solution)
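
to make the first two points concrete, a minimal sketch of what such a
wrapper could look like (copy_struct and struct example are made-up names
here, not anything from the posted patch):

struct example { int a, b; char buf[16]; };

/* copy one struct into another through the compiler's builtin,
 * under a name that is unlikely to collide with anything */
#define copy_struct(dst, src) \
	__builtin_memcpy((dst), (src), sizeof(*(dst)))

void copy_one(struct example *d, const struct example *s)
{
	copy_struct(d, s);	/* instead of '*d = *s;' */
}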


> the patch works just fine here, and i'm reasonably sure that no entry/exit
> point is missing. (i ran testcode for some time which does 'std' and 'cld'
> and thus tests the IRQ handler entry code and the syscall entry code. No
> problems whatsoever, as expected.)

as an additional data point: i didn't see any problems either during the
last couple of months that i ran without (most of) the clds.
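
for reference, a user-space stress test along the lines Ingo describes
could look roughly like this (a sketch only, not his actual test program):

#include <unistd.h>

int main(void)
{
	/* keep the direction flag set most of the time, so syscall and
	 * IRQ entry paths regularly see DF=1 and have to do their own
	 * cld; run until interrupted */
	for (;;) {
		__asm__ __volatile__("std" ::: "cc");	/* DF = 1 */
		getpid();	/* enter the kernel with DF set */
		__asm__ __volatile__("cld" ::: "cc");	/* restore the ABI default */
	}
}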


@@ -32,7 +32,6 @@
{
int d0, d1, d2;
__asm__ __volatile__(
- "cld\n"
"1:\tlodsb\n\t"
"stosb\n\t"
"testb %%al,%%al\n\t"

these changes are visible to userland; either your previous approach of
putting the cld under #ifdef __KERNEL__, or simply putting the whole
header under #ifdef __KERNEL__, would probably be safer.
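
a rough sketch of the per-cld variant, assuming the usual __KERNEL__ guard
(the __CLD_PREFIX name is made up for illustration):

#ifdef __KERNEL__
#define __CLD_PREFIX ""		/* kernel entry points guarantee DF=0 */
#else
#define __CLD_PREFIX "cld\n\t"	/* user space keeps the defensive cld */
#endif

static inline char * strcpy(char * dest, const char *src)
{
	int d0, d1, d2;
	__asm__ __volatile__(
		__CLD_PREFIX
		"1:\tlodsb\n\t"
		"stosb\n\t"
		"testb %%al,%%al\n\t"
		"jne 1b"
		: "=&S" (d0), "=&D" (d1), "=&a" (d2)
		: "0" (src), "1" (dest)
		: "memory");
	return dest;
}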

