Date: Thu, 31 Aug 2023 13:58:02 -0700
From: Kees Cook <>
Subject: Re: [PATCH] pstore: Base compression input buffer size on estimated compressed size
On Wed, Aug 30, 2023 at 11:22:38PM +0200, Ard Biesheuvel wrote:
> So let's fix both issues, by bringing back the typical case estimation of
> how much ASCII text captured from the dmesg log might fit into a pstore
> record of a given size after compression. The original implementation
> used the computation given below for zlib, and so simply taking 2x as a
> ballpark number seems appropriate here.
>
>	switch (size) {
>	/* buffer range for efivars */
>	case 1000 ... 2000:
>		cmpr = 56;
>		break;
>	case 2001 ... 3000:
>		cmpr = 54;
>		break;
>	case 3001 ... 3999:
>		cmpr = 52;
>		break;
>	/* buffer range for nvram, erst */
>	case 4000 ... 10000:
>		cmpr = 45;
>		break;
>	default:
>		cmpr = 60;
>		break;
>	}
>
>	return (size * 100) / cmpr;
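As a worked example of that table: a 1024-byte record falls in the 1000 ... 2000
range, so cmpr = 56 and the estimate is 1024 * 100 / 56, roughly 1828 bytes
(~1.8x). Across the other ranges the multiplier works out to somewhere between
~1.67x and ~2.2x, which is where the 2x ballpark comes from.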
I remained suspicious of this, since the old worst-case ratio was 60%, not the
50% implied by a 2x estimate. In testing with some instrumentation I was able
to find compression failures (see the "-22", i.e. -EINVAL, results in the
middle):
pstore: backend max size:1024 dump size:2034 zipped size:800
pstore: backend max size:1024 dump size:1943 zipped size:714
pstore: backend max size:1024 dump size:2008 zipped size:739
pstore: backend max size:1024 dump size:2024 zipped size:722
pstore: backend max size:1024 dump size:2017 zipped size:926
pstore: backend max size:1024 dump size:2046 zipped size:-22
pstore: backend max size:1024 dump size:2046 zipped size:-22
pstore: backend max size:1024 dump size:2007 zipped size:890
pstore: backend max size:1024 dump size:2035 zipped size:830
pstore: backend max size:1024 dump size:2012 zipped size:844
pstore: backend max size:1024 dump size:1978 zipped size:823
pstore: backend max size:1024 dump size:2013 zipped size:543
pstore: backend max size:1024 dump size:2000 zipped size:820
So, I altered the patch slightly to use the 60% worst-case (i.e. an underestimate), and that did the trick (you can see the smaller "dump size" output from the kmsg dumper):
pstore: backend max size:1024 dump size:1590 zipped size:553
pstore: backend max size:1024 dump size:1534 zipped size:792
pstore: backend max size:1024 dump size:1647 zipped size:414
pstore: backend max size:1024 dump size:1641 zipped size:599
pstore: backend max size:1024 dump size:1670 zipped size:643
pstore: backend max size:1024 dump size:1692 zipped size:684
pstore: backend max size:1024 dump size:1697 zipped size:934
pstore: backend max size:1024 dump size:1696 zipped size:870
pstore: backend max size:1024 dump size:1677 zipped size:791
pstore: backend max size:1024 dump size:1683 zipped size:772
pstore: backend max size:1024 dump size:1677 zipped size:742
pstore: backend max size:1024 dump size:1704 zipped size:714
pstore: backend max size:1024 dump size:1683 zipped size:715
pstore: backend max size:1024 dump size:1693 zipped size:479
pstore: backend max size:1024 dump size:1667 zipped size:487
pstore: backend max size:1024 dump size:1639 zipped size:760
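For reference, the input-size estimate I ended up with boils down to something
like this (just a sketch with a made-up helper name, not the literal diff):

	/*
	 * Sketch only: assume the compressed output is at worst 60% of the
	 * input, so a record of record_size bytes can hold roughly
	 * record_size * 100 / 60 (~1.67x) bytes of dmesg text.
	 * Underestimating here keeps the compressed result within the
	 * backend's record size.
	 */
	static size_t max_kmsg_bytes(size_t record_size)
	{
		return (record_size * 100) / 60;
	}

Sanity check against the log above: 1024 * 100 / 60 is about 1706 bytes, which
matches the ~1590-1704 dump sizes the kmsg dumper is now producing.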
However, we still need a different _decompression_ buffer size, as we want to
over-estimate that one. I just used 3x, which is always going to be enough.
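Roughly (again just a sketch, hypothetical function name):

	#include <linux/slab.h>	/* kvzalloc() */

	/*
	 * Sketch only: when decompressing a record for reading, over-estimate
	 * the output buffer. 3x the record size is comfortably above the
	 * ~1.67x ratio assumed at compression time.
	 */
	static void *alloc_decompress_buf(size_t record_size)
	{
		return kvzalloc(record_size * 3, GFP_KERNEL);
	}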
I'll send a v2 to see what you think...
--
Kees Cook