    Subject: Re: [PATCH 9/9] x86/lib/memset_64.S: Optimize memset by enhanced REP MOVSB/STOSB
    > From: Fenghua Yu <fenghua.yu@intel.com>
    >
    > Support memset() with enhanced rep stosb. On processors supporting
    > enhanced REP MOVSB/STOSB, the alternative memset_c_e function using
    > enhanced rep stosb overrides the fast string alternative memset_c
    > and the original function.

    FWIW most memsets and memcpys are generated by modern gccs as inline
    code, depending on size, alignment etc., so they will never call your
    new function. The same may be true of memmove (not entirely sure).
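
    For example, with a quick standalone test like the one below (plain
    userspace C, not kernel code), a recent gcc at -O2 expands the
    constant-size memset inline, so the out-of-line memset is never
    reached; the variable-size case may or may not stay a real call,
    depending on the compiler's heuristics:

        #include <string.h>

        void clear_small(char *buf)
        {
                /* small constant length: gcc normally expands this inline
                 * as a handful of stores or a rep stos sequence */
                memset(buf, 0, 64);
        }

        void clear_var(char *buf, size_t len)
        {
                /* variable length: may or may not end up as a call to the
                 * out-of-line memset, depending on gcc's stringop heuristics */
                memset(buf, 0, len);
        }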

    One way to work around this would be to add suitable logic to the
    string.h macros and make sure the out-of-line code is always called
    for large copies when the count is constant and large enough.
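
    Something roughly along these lines, as a standalone sketch rather
    than a patch; __memset stands in for the out-of-line assembly
    implementation, and the 256-byte cutoff is an arbitrary placeholder:

        #include <stddef.h>

        /* stand-in for the out-of-line assembly memset */
        void *__memset(void *s, int c, size_t n)
        {
                unsigned char *p = s;

                while (n--)
                        *p++ = (unsigned char)c;
                return s;
        }

        /* route large constant-size memsets to the out-of-line copy,
         * leave everything else to gcc's usual inline expansion */
        #define LARGE_MEMSET_CUTOFF 256   /* placeholder threshold */
        #define memset(s, c, n)                                            \
                ((__builtin_constant_p(n) && (n) >= LARGE_MEMSET_CUTOFF) ? \
                 __memset((s), (c), (n)) :                                 \
                 __builtin_memset((s), (c), (n)))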

    There used to be such logic, but it was partly removed later.

    The only problem is that it's hard to decide what to do when the count
    is variable, and where a good threshold lies.

    Or maybe it would be better to just fix gcc to emit the new
    instructions, but then it would be difficult to patch them in only on
    the CPUs that actually support them.
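
    That is really what the call-based approach buys: with a single memset
    symbol the kernel can pick the implementation once per boot, instead
    of having rep stosb scattered inline through every object file. A
    rough userspace illustration of the same idea (CPUID feature check
    plus function-pointer dispatch; the kernel uses alternatives patching
    instead, and the ERMS bit position here is from memory):

        #include <stddef.h>
        #include <string.h>

        /* ERMS feature flag: CPUID.(EAX=7,ECX=0):EBX bit 9 (assumed here) */
        static int cpu_has_erms(void)
        {
                unsigned int eax = 7, ebx = 0, ecx = 0, edx = 0;

                __asm__ volatile("cpuid"
                        : "+a"(eax), "=b"(ebx), "+c"(ecx), "=d"(edx));
                return (ebx >> 9) & 1;
        }

        /* memset variant using enhanced rep stosb */
        static void *memset_erms(void *s, int c, size_t n)
        {
                void *ret = s;

                __asm__ volatile("rep stosb"
                        : "+D"(s), "+c"(n)
                        : "a"(c)
                        : "memory");
                return ret;
        }

        /* one indirection point, chosen once at startup */
        static void *(*memset_impl)(void *, int, size_t) = memset;

        void memset_select(void)
        {
                if (cpu_has_erms())
                        memset_impl = memset_erms;
        }

    In the kernel the same decision is made once at boot by the
    alternatives code, which rewrites the code in place instead of going
    through a pointer.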

    -Andi



