From:    Noah Goldstein <>
Date:    Fri, 10 Dec 2021 12:35:48 -0600
Subject: Re: [PATCH v4] arch/x86: Improve 'rep movs{b|q}' usage in memmove_64.S
On Fri, Nov 19, 2021 at 6:05 PM Noah Goldstein <goldstein.w.n@gmail.com> wrote:
>
> On Fri, Nov 19, 2021 at 4:31 PM David Laight <David.Laight@aculab.com> wrote:
> >
> > From: Noah Goldstein
> > > Sent: 17 November 2021 22:45
> > >
> > > On Wed, Nov 17, 2021 at 4:31 PM David Laight <David.Laight@aculab.com> wrote:
> > > >
> > > > From: Noah Goldstein
> > > > > Sent: 17 November 2021 21:03
> > > > >
> > > > > Add check for "short distance movsb" for forwards FSRM usage and
> > > > > entirely remove backwards 'rep movsq'. Both of these usages hit "slow
> > > > > modes" that are an order of magnitude slower than usual.
> > > > >
> > > > > 'rep movsb' has some noticeable VERY slow modes that the current
> > > > > implementation is either 1) not checking for or 2) intentionally
> > > > > using.
> > > >
> > > > How does this relate to the decision that glibc made a few years
> > > > ago to use backwards 'rep movs' for non-overlapping copies?
> > >
> > > GLIBC doesn't use backwards `rep movs`. Since the regions are
> > > non-overlapping it just uses a forward copy. Backwards `rep movs` is
> > > from setting the direction flag (`std`) and is a very slow byte
> > > copy. For overlapping regions where a backwards copy is necessary
> > > GLIBC uses a 4x vec copy loop.
> >
> > Try to find this commit 6fb8cbcb58a29fff73eb2101b34caa19a7f88eba
> >
> > Or follow links from https://www.win.tue.nl/~aeb/linux/misc/gcc-semibug.html
> > But I can't find the actual patch.
> >
> > The claims were a massive performance increase for the reverse copy.
>
> I don't think that's referring to optimizations around `rep movs`. It
> appears to be referring to fallout from this patch:
> https://sourceware.org/git/?p=glibc.git;a=commit;h=6fb8cbcb58a29fff73eb2101b34caa19a7f88eba
>
> which broke programs misusing `memcpy` with overlapping regions,
> resulting in this fix:
> https://sourceware.org/git/?p=glibc.git;a=commit;h=0354e355014b7bfda32622e0255399d859862fcd
>
> AFAICT support for ERMS was only added around:
> https://sourceware.org/git/?p=glibc.git;a=commit;h=13efa86ece61bf84daca50cab30db1b0902fe2db
>
> Either way, GLIBC memcpy/memmove at the moment most certainly does not
> use backwards `rep movs`:
> https://sourceware.org/git/?p=glibc.git;a=blob;f=sysdeps/x86_64/multiarch/memmove-vec-unaligned-erms.S;hb=HEAD#l655
>
> as it is very slow.
>
> > The pdf from www.agner.org/optimize may well indicate why some
> > copies are unexpectedly slow due to cache access aliasing.
>
> Even in the `4k` aliasing case `rep movsb` seems to stay within a
> factor of 2 of optimal whereas the `std` backwards `rep movs` loses
> by a factor of 10.
>
> Either way, `4k` aliasing detection is mostly a concern of `memcpy` as
> the direction of copy for `memmove` is a correctness question, not
> an optimization.
>
> > I'm pretty sure that Intel cpus (possibly from Ivy Bridge onwards)
> > can be persuaded to copy 8 bytes/clock for in-cache data with
> > a fairly simple loop that contains 2 reads (maybe misaligned)
> > and two writes (so 16 bytes per iteration).
> > Extra unrolling just adds extra code top and bottom.
> >
> > You might want a loop like:
> > 1:      mov 0(%rsi, %rcx),%rax
> >         mov 8(%rsi, %rcx),%rdx
> >         mov %rax, 0(%rdi, %rcx)
> >         mov %rdx, 8(%rdi, %rcx)
> >         add $16, %rcx
> >         jnz 1b
> >
> >         David
>
> The backwards loop already has a 4x unrolled `movq` loop.

ping.

> > -
> > Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> > Registration No: 1397386 (Wales)
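
To make the `std` point above concrete, here is a minimal user-space sketch
(purely illustrative, assuming GCC-style inline asm on x86-64; the helper
names are made up for the example and nothing here is taken from
memmove_64.S). It shows forwards `rep movsb` versus the backwards form that
sets the direction flag, and why copy direction is a correctness question
for an overlapping memmove:

/*
 * Illustrative only; not the kernel's memmove_64.S code, and the helper
 * names are invented for this sketch.  Forwards `rep movsb` runs with
 * the direction flag clear; the backwards form sets DF with `std` and
 * walks RSI/RDI down from the last byte, which is the slow byte copy
 * referred to above.  The overlap test shows why copy direction is a
 * correctness question for memmove when dst > src.
 */
#include <stdio.h>
#include <string.h>

static void movsb_fwd(void *dst, const void *src, size_t len)
{
        asm volatile("rep movsb"
                     : "+D"(dst), "+S"(src), "+c"(len) :: "memory");
}

static void movsb_bwd(void *dst, const void *src, size_t len)
{
        void *d = (char *)dst + len - 1;
        const void *s = (const char *)src + len - 1;

        /* Set DF, copy from the last byte down, then clear DF again. */
        asm volatile("std; rep movsb; cld"
                     : "+D"(d), "+S"(s), "+c"(len) :: "memory");
}

int main(void)
{
        char a[16], b[16];

        strcpy(a, "abcdefgh");
        strcpy(b, "abcdefgh");

        movsb_fwd(a + 2, a, 6);  /* dst > src, forward copy: clobbers source */
        movsb_bwd(b + 2, b, 6);  /* dst > src, backward copy: correct result */

        printf("forward:  %.8s\nbackward: %.8s\n", a, b);
        return 0;
}

The forward variant clobbers bytes it has not read yet, while the `std`
variant gives the expected memmove result; the `std` form is also the slow
byte-at-a-time mode discussed in the thread.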