Date:    Thu, 16 Aug 2012 19:52:34 +0200
From:    Andi Kleen <>
Subject: Re: [GIT PULL] Update LZO compression
> I have locked the Allwinner A10 CPU in my Mele A2000 to 60 MHz using cpufreq-set,
> and ran your test. rnd.lzo is a 9 MB file from /dev/urandom compressed with lzo.
> There doesn't seem to be a significant difference between all three variants.
I've found that compression benchmarks depend a lot on the data being compressed.

urandom data (which should be essentially incompressible) is handled by different code paths in the compressor than more compressible data; compression then degenerates into a complicated memcpy.

Then again, there are IO benchmarks that only write zeroes, which also gives an unrealistic picture.
Usually it's best to use a corpus with different data types, from very compressible to barely compressible, and look at the aggregate.

For my snappy work I usually used at least large executables (medium compressibility), some PDFs (already compressed, so low), and uncompressed source code tars (high compression).
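A rough sketch of such a corpus run, using the userspace liblzo2 API rather than the kernel code (the file list, buffer sizing and output format are just illustrative), could look like this; feed it a mix of executables, PDFs, source tarballs and urandom dumps:

/* corpus_bench.c - sketch: per-file and aggregate LZO1X-1 throughput/ratio.
 * Uses userspace liblzo2.  Build with: cc -O2 corpus_bench.c -llzo2
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <lzo/lzo1x.h>

int main(int argc, char **argv)
{
	static unsigned char wrkmem[LZO1X_1_MEM_COMPRESS];
	double total_in = 0, total_secs = 0, total_out = 0;
	int i;

	if (lzo_init() != LZO_E_OK)
		return 1;

	for (i = 1; i < argc; i++) {
		FILE *f = fopen(argv[i], "rb");
		if (!f)
			continue;
		fseek(f, 0, SEEK_END);
		long len = ftell(f);
		rewind(f);

		unsigned char *src = malloc(len);
		/* worst-case LZO1X output size per the liblzo2 docs */
		unsigned char *dst = malloc(len + len / 16 + 64 + 3);
		if (fread(src, 1, len, f) != (size_t)len) {
			fclose(f);
			continue;
		}
		fclose(f);

		lzo_uint out_len = 0;
		struct timespec t0, t1;
		clock_gettime(CLOCK_MONOTONIC, &t0);
		lzo1x_1_compress(src, len, dst, &out_len, wrkmem);
		clock_gettime(CLOCK_MONOTONIC, &t1);

		double secs = (t1.tv_sec - t0.tv_sec) +
			      (t1.tv_nsec - t0.tv_nsec) / 1e9;
		printf("%-30s %8.1f MB/s  ratio %.2f\n", argv[i],
		       len / secs / (1 << 20), (double)len / out_len);

		total_in += len;
		total_out += out_len;
		total_secs += secs;
		free(src);
		free(dst);
	}
	if (total_secs > 0)
		printf("%-30s %8.1f MB/s  ratio %.2f\n", "aggregate",
		       total_in / total_secs / (1 << 20), total_in / total_out);
	return 0;
}

The point is the last line: the per-file numbers tell you how each data type behaves, but the aggregate over the whole mixed corpus is what you compare between compressor variants.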
-Andi