Subject: Re: [PATCH 1/2] lib: add fast lzo decompressor
Hi.

On Fri, 2009-04-03 at 12:54 +0200, Andreas Robinson wrote:
> The LZO compressor can produce more bytes than it consumes but here the
> output buffer is the same size as the input.
> This macro in linux/lzo.h defines how big the buffer needs to be:
> #define lzo1x_worst_compress(x) ((x) + ((x) / 16) + 64 + 3)

Okay. Am I right in thinking (from staring at the code) that the
compression algorithm simply assumes its output buffer is big enough? (I
don't see it checking out_len, only writing to it.) If that's the case,
I guess I need to (ideally) persuade the cryptoapi guys to extend the
API so callers can find out how big an output buffer a particular
compression algorithm needs - or learn how they've already done that
(though it doesn't look to me like they have).
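
For reference, here's a rough sketch of what sizing the output buffer
with that macro looks like, using lzo1x_1_compress() and
LZO1X_MEM_COMPRESS from linux/lzo.h (the allocation strategy and error
handling here are illustrative only, not how any particular caller
does it):

	#include <linux/lzo.h>
	#include <linux/vmalloc.h>

	/*
	 * Size the destination with lzo1x_worst_compress() so the
	 * compressor can never overrun it - it doesn't bounds-check
	 * against out_len itself.
	 */
	static int compress_buf(const unsigned char *src, size_t src_len)
	{
		size_t dst_len = lzo1x_worst_compress(src_len);
		unsigned char *dst = vmalloc(dst_len);
		void *wrkmem = vmalloc(LZO1X_MEM_COMPRESS);
		int ret = -ENOMEM;

		if (!dst || !wrkmem)
			goto out;

		/* On success, dst_len holds the actual compressed size. */
		ret = lzo1x_1_compress(src, src_len, dst, &dst_len, wrkmem);
	out:
		vfree(wrkmem);
		vfree(dst);
		return ret;
	}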

> If there are multiple threads perhaps they clobber each other's output
> buffers?

Nope. The output buffers you see here are fed to the next stage of the
pipeline (the block I/O code), which combines them (under a mutex) into
a stream of |index|size|data|index|size|data... so we never have to
worry about which processor compressed the data (or which one will
decompress it later). As I said earlier, this has worked fine with LZF -
or with no compression - for years. It's only LZO that causes me
problems.
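
Roughly, the writer side looks like the sketch below - the helper
names (stream_write(), write_record()) are placeholders I'm making up
for illustration, not the real functions:

	#include <linux/mutex.h>
	#include <linux/types.h>

	/* Stand-in for the block I/O layer's append routine. */
	extern int stream_write(const void *buf, size_t len);

	static DEFINE_MUTEX(stream_mutex);

	/*
	 * Append one |index|size|data| record while holding the mutex,
	 * so readers can reassemble pages no matter which CPU
	 * compressed them.
	 */
	static int write_record(u32 index, u32 size, const void *data)
	{
		int ret;

		mutex_lock(&stream_mutex);
		ret = stream_write(&index, sizeof(index));
		if (!ret)
			ret = stream_write(&size, sizeof(size));
		if (!ret)
			ret = stream_write(data, size);
		mutex_unlock(&stream_mutex);
		return ret;
	}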

Thanks!

Nigel


