Subject: Re: x86 memcpy performance
On Sun, Aug 14, 2011 at 2:40 PM, Borislav Petkov <bp@alien8.de> wrote:
>> > +   if (__len >= 512)                                       \
>> > +           __ret = __sse_memcpy((dst), (src), __len);      \
>> > +   else                                                    \
>> > +           __ret = __memcpy((dst), (src), __len);          \
>> > +   __ret;                                                  \
>> > +})
>>
>> Please, no. Do not inline every memcpy invocation.
>> This is pure bloat (considering how many memcpy calls there are)
>> and it doesn't even win anything in speed, since there will be
>> a function call either way.
>> Put the __len >= 512 check inside your memcpy instead.
>
> In the __len < 512 case, this would actually cause two function
> calls: one to __sse_memcpy and then one to __memcpy.

You didn't notice the "else".
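
To spell it out: with the size check inside the function, a small copy
costs exactly one call, not two. A minimal sketch of that shape, reusing
the __sse_memcpy/__memcpy names from the quoted patch (the dispatching
wrapper itself is made up here for illustration, it is not the actual
patch):

	void *memcpy_dispatch(void *dst, const void *src, size_t len)
	{
		/* One out-of-line branch here instead of one inlined
		 * at every call site; either way the caller pays for
		 * a single function call. */
		if (len >= 512)
			return __sse_memcpy(dst, src, len);
		return __memcpy(dst, src, len);
	}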

>> You may do the check if you know that __len is constant:
>> if (__builtin_constant_p(__len) && __len >= 512) ...
>> because in this case gcc will evaluate it at compile-time.
>
> That could justify the bloat at least partially.

There will be no bloat in this case.
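
When __len is a compile-time constant, gcc folds both the
__builtin_constant_p() test and the comparison while expanding the
macro, so each call site compiles down to a single direct call with no
run-time branch. A minimal sketch of that guard, again borrowing the
names from the quoted patch purely for illustration:

	#define memcpy(dst, src, len)                                   \
	({                                                              \
		void *__ret;                                            \
		size_t __len = (len);                                   \
		if (__builtin_constant_p(__len) && __len >= 512)        \
			__ret = __sse_memcpy((dst), (src), __len);      \
		else                                                    \
			__ret = __memcpy((dst), (src), __len);          \
		__ret;                                                  \
	})

For a non-constant __len the condition is false at compile time, so the
macro degenerates to a plain __memcpy() call at the call site.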
--
vda