From: Jinpu Wang <>
Date: Wed, 10 Oct 2018 09:33:05 +0200
Subject: Re: [PATCH] lib: memcmp optimization
On Wed, Oct 10, 2018 at 1:13 AM Andrew Morton <akpm@linux-foundation.org> wrote:
>
> On Tue, 9 Oct 2018 16:28:11 +0200 Jack Wang <xjtuwjp@gmail.com> wrote:
>
> > From: Florian-Ewald Mueller <florian-ewald.mueller@profitbricks.com>
> >
> > During testing, I have configured 128 md/raid1's and, while under
> > heavy IO, I started a check on each of them
> > (echo check > /sys/block/mdx/md/sync_action).
> >
> > The CPU utilization went through the ceiling and, when looking for
> > the cause (with 'perf top'), I discovered that ~50% of the time
> > was spent in memcmp() called from process_checks().
> >
> > With this patch applied, it drops to 4% - 10%.
>
> Which CPU architecture?  Most important architectures appear to define
> __HAVE_ARCH_MEMCMP.

x86_64 in our case; I should have documented it more clearly.

> > --- a/lib/string.c
> > +++ b/lib/string.c
> > @@ -852,7 +852,7 @@ EXPORT_SYMBOL(memmove);
> >   * @count: The size of the area.
> >   */
> >  #undef memcmp
> > -__visible int memcmp(const void *cs, const void *ct, size_t count)
> > +static inline int __memcmp(const void *cs, const void *ct, size_t count)
>
> What the heck does __visible do?
>
> >  {
> >  	const unsigned char *su1, *su2;
> >  	int res = 0;
> > @@ -862,6 +862,20 @@ __visible int memcmp(const void *cs, const void *ct, size_t count)
> >  			break;
> >  	return res;
> >  }
> > +__visible int memcmp(const void *cs, const void *ct, size_t count)
> > +{
> > +	const uint64_t *l1p = cs;
> > +	const uint64_t *l2p = ct;
> > +
> > +	while (count >= sizeof(*l1p)) {
> > +		if (*l1p != *l2p)
> > +			return __memcmp(l1p, l2p, sizeof(*l1p));
> > +		count -= sizeof(*l1p);
> > +		++l1p;
> > +		++l2p;
> > +	}
> > +	return __memcmp(l1p, l2p, count);
> > +}
>
> This is going to do bad things if the incoming addresses aren't
> suitably aligned.

Right, as you said, this is easy to fix.

> Certainly, byte-at-a-time is a pretty lame implementation when the
> addresses are suitably aligned.  A fallback to the lame version when
> there is misalignment will be simple to do.  And presumably there will
> be decent benefits to whoever is actually using this code.  But I'm
> wondering who is actually using this code!

I found a recent discussion about why x86-64 is using the generic
string functions here (+cc John, x86):
https://lkml.org/lkml/2018/10/3/818
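For illustration, here is a minimal sketch of the misalignment fallback
Andrew suggests: take the word-at-a-time path only when both pointers
share 64-bit alignment, and delegate everything else (including the
final ordering result) to the byte loop. This is my own userspace-style
sketch, not the actual patch; memcmp_bytewise() and memcmp_wordwise()
are hypothetical names standing in for lib/string.c's byte loop and the
patched memcmp().

#include <stdint.h>
#include <stddef.h>

/* The existing byte-at-a-time loop, as in lib/string.c today. */
static int memcmp_bytewise(const void *cs, const void *ct, size_t count)
{
	const unsigned char *su1, *su2;
	int res = 0;

	for (su1 = cs, su2 = ct; 0 < count; ++su1, ++su2, count--)
		if ((res = *su1 - *su2) != 0)
			break;
	return res;
}

int memcmp_wordwise(const void *cs, const void *ct, size_t count)
{
	const uint64_t *l1p, *l2p;

	/* Misaligned input: fall back to the byte loop entirely. */
	if (((uintptr_t)cs | (uintptr_t)ct) & (sizeof(*l1p) - 1))
		return memcmp_bytewise(cs, ct, count);

	l1p = cs;
	l2p = ct;
	while (count >= sizeof(*l1p)) {
		if (*l1p != *l2p)
			/* Words differ; let the byte loop compute the sign. */
			return memcmp_bytewise(l1p, l2p, sizeof(*l1p));
		count -= sizeof(*l1p);
		++l1p;
		++l2p;
	}
	/* Tail of fewer than 8 bytes. */
	return memcmp_bytewise(l1p, l2p, count);
}

Note that the equality test on uint64_t words is endianness-independent,
and the ordering result always comes from the byte loop, so memcmp()
semantics are preserved on both little- and big-endian machines.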
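And a hypothetical userspace harness to check such a sketch against
libc memcmp() across alignments, lengths, and mismatch positions
(assumes the memcmp_wordwise() above):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int memcmp_wordwise(const void *cs, const void *ct, size_t count);

int main(void)
{
	unsigned char a[128], b[128];
	size_t off, len, i;
	int fail = 0;

	srand(42);
	for (off = 0; off < 8; off++) {
		for (len = 0; off + len <= sizeof(a); len++) {
			for (i = 0; i < sizeof(a); i++)
				a[i] = b[i] = (unsigned char)rand();
			if (len)	/* force one differing byte */
				b[off + len / 2] ^= 0x5a;

			int want = memcmp(a + off, b + off, len);
			int got = memcmp_wordwise(a + off, b + off, len);

			/* Only the sign of the result is specified. */
			if ((want < 0) != (got < 0) || (want > 0) != (got > 0))
				fail++;
		}
	}
	printf("%s\n", fail ? "FAIL" : "OK");
	return fail != 0;
}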
--
Jack Wang
Linux Kernel Developer

ProfitBricks GmbH
Greifswalder Str. 207
D - 10405 Berlin

Tel:   +49 30 577 008 042
Fax:   +49 30 577 008 299
Email: jinpu.wang@profitbricks.com
URL:   https://www.profitbricks.de

Registered office: Berlin
Commercial register: Amtsgericht Charlottenburg, HRB 125506 B
Managing directors: Achim Weiss, Matthias Steinberg, Christoph Steffens