Subject: Re: [PATCH 1/2] lib: add fast lzo decompressor
From: Nigel Cunningham <>
Date: Fri, 03 Apr 2009 22:48:14 +1100
Hi.
On Fri, 2009-04-03 at 12:54 +0200, Andreas Robinson wrote:
> The LZO compressor can produce more bytes than it consumes, but here the
> output buffer is the same size as the input.
> This macro in linux/lzo.h defines how big the buffer needs to be:
> #define lzo1x_worst_compress(x) ((x) + ((x) / 16) + 64 + 3)
Okay. Am I right in thinking (from staring at the code) that the compression algo just assumes its output buffer is big enough? (I don't see it checking out_len on entry, only writing to it.) If that's the case, I guess I need to (ideally) persuade the cryptoapi guys to extend the API so a caller can find out how big an output buffer a particular compression algorithm needs - or learn how they've already done that (though it doesn't look to me like they have).
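For what it's worth, a minimal sketch of what the safe calling pattern would look like - the helper itself is hypothetical and untested, but lzo1x_worst_compress(), LZO1X_1_MEM_COMPRESS and lzo1x_1_compress() are the real names from linux/lzo.h:

#include <linux/lzo.h>
#include <linux/slab.h>

/* Hypothetical helper: compress one chunk with a correctly sized
 * output buffer. lzo1x_1_compress() does not check *dst_len on
 * entry, so dst must already hold lzo1x_worst_compress(src_len)
 * bytes - the worst case for incompressible input. */
static int compress_chunk(const u8 *src, size_t src_len,
			  u8 **dst_out, size_t *dst_len)
{
	u8 *dst = kmalloc(lzo1x_worst_compress(src_len), GFP_KERNEL);
	void *wrkmem = kmalloc(LZO1X_1_MEM_COMPRESS, GFP_KERNEL);
	int ret = -ENOMEM;

	if (dst && wrkmem)
		ret = lzo1x_1_compress(src, src_len, dst, dst_len, wrkmem);

	kfree(wrkmem);
	if (ret == LZO_E_OK)
		*dst_out = dst;
	else
		kfree(dst);	/* kfree(NULL) is safe */
	return ret;
}

The whole point is just that dst is sized with lzo1x_worst_compress() rather than src_len.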
> If there are multiple threads perhaps they clobber each other's output
> buffers?
Nope. The output buffers you see here are fed to the next part of the pipeline (the block I/O code), which combines them (under a mutex) into a stream of |index|size|data|index|size|data... so that we never have to worry about which processor compressed a chunk (or which will decompress it later). As I said earlier, it's worked fine with LZF - or no compression - for years. It's just LZO that causes me problems.
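In case it helps, a rough sketch of what the writer side of that pipeline does - bio_write() here is a made-up stand-in for the real block I/O path, and the record layout is just the one described above:

#include <linux/mutex.h>
#include <linux/types.h>

extern int bio_write(const void *buf, size_t len);	/* hypothetical I/O helper */

static DEFINE_MUTEX(stream_mutex);

/* Compressor threads hand finished buffers to this writer, which
 * serialises them into |index|size|data| records under a mutex,
 * so it doesn't matter which CPU compressed a chunk or which one
 * decompresses it later. */
static int write_record(u32 index, u32 size, const u8 *data)
{
	int ret;

	mutex_lock(&stream_mutex);
	ret = bio_write(&index, sizeof(index));
	if (!ret)
		ret = bio_write(&size, sizeof(size));
	if (!ret)
		ret = bio_write(data, size);
	mutex_unlock(&stream_mutex);
	return ret;
}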
Thanks!
Nigel