Date: 14 Apr 2004
From: Paulo Marques
Subject: Re: Using compression before encryption in device-mapper

Guillaume Lacôte wrote:

>>>Oops! I thought it was possible to guarantee with Huffman encoding
>>>(which is more basic than Lempel-Ziv) that the compressed data uses no
>>>more than 1 extra bit for every byte (i.e. 12.5% more space).

WTF??

Zlib gives a maximum increase of 0.1% + 12 bytes (from the zlib manual), which
for a 512-byte block amounts to a guaranteed worst-case increase of about 2.4%.

I think that zlib already does the "if this is bigger than original, just mark
the block type as uncompressed" algorithm internally, so the increase is minimal
in the worst case.
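
To make the numbers concrete, here is a minimal user-space sketch (assuming
zlib >= 1.2, which provides compressBound()) that prints the documented
worst-case size for a 512-byte sector and compresses one sector to show a
typical result:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
	unsigned char in[512];
	unsigned char out[1024];
	uLongf outlen = sizeof(out);
	uLong bound = compressBound(512);

	/* Fill the sector with something arbitrary; the bound does not
	 * depend on the contents, only on the source length. */
	memset(in, 0xA5, sizeof(in));

	printf("compressBound(512) = %lu bytes (%.1f%% worst-case overhead)\n",
	       bound, (bound - 512) * 100.0 / 512);

	if (compress2(out, &outlen, in, sizeof(in), Z_BEST_COMPRESSION) == Z_OK)
		printf("this particular sector compressed to %lu bytes\n",
		       (unsigned long)outlen);

	return 0;
}

(The exact bound depends on the zlib version, but it lands right around the
2.4% figure above.)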

A while ago I started working on a proof of concept kind of thing, that was a
network block device server that compressed the data sent to it.

I think that if we want to go ahead with this, we really should make the extra
effort to have actual compression, and use the extra space.

In my experience it is possible to get "near tar.gz" compression ratios on a
scheme like this.

Biggest problems:

1 - We really need to have an extra layer of metadata. This is the worst part.
Not only does it make the thing much more complex, it also brings new problems,
like making sure that the order in which data is written to disk is transparent
to the upper layers and won't wreck the journal on a journaling filesystem.
(There is a rough sketch of the metadata I mean right after this list.)

2 - The compression layer should report a large block size upwards, and use a
small block size downwards, so that compression is as efficient as possible.
Good results are obtained with a 32kB / 512 byte ratio. This can cause extra
read-modify-write cycles upwards.

3 - If we use a fixed size partition to store compressed data, the apparent
uncompressed size of the block device would have to change. Filesystems aren't
prepared to have a block device that changes size on-the-fly. If we can solve
this problem, then this compression layer could be really useful. Otherwise it
can only be used over loopback on files that can grow in size (which could
still be useful).

4 - The block device has no knowledge of what blocks are actually being used by
the filesystem above, so it has to compress and store blocks that are actually
"unused". This is not an actual problem, is just that it could be a little more
efficient if it could ignore unused blocks.
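
Roughly the kind of per-block metadata I have in mind (an illustrative layout
only, not the on-disk format my proof of concept used): one entry per 32kB
logical block, mapping it to a run of 512-byte sectors on the real device.

#include <stdint.h>

#define LOGICAL_BLOCK_SIZE	(32 * 1024)	/* block size reported upwards */
#define SECTOR_SIZE		512		/* allocation unit downwards   */

struct cblk_extent {
	uint64_t start_sector;	/* first sector holding this block's data      */
	uint32_t length;	/* stored length in bytes, compressed or not   */
	uint8_t  uncompressed;	/* 1 = stored raw because compression grew it  */
	uint8_t  pad[3];
};

/* A read looks up the extent, reads "length" bytes from "start_sector"
 * and inflates them unless "uncompressed" is set.  A write may change
 * "length", which is exactly what forces the allocator and write-ordering
 * headaches from point 1 above. */

Larger logical blocks give the compressor more context to work with, while the
small sectors below keep the rounding-up waste per stored block small.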

When I did the tests I mounted an ext2 filesystem over the network block device.
At least with ext2, the requests were gathered so that the server would often
receive 128kB requests, so the big-block-size problem is not too serious
(performance will be bad anyway; this is a clear space/speed trade-off). This
was kernel 2.4. I don't know enough about the kernel internals to know which
layer is responsible for this "gathering".

On the up side, having an extra metadata layer already provides the "is not
compressed" bit, so that we never need more than one disk block to store one
block of uncompressed data.

Anyway, I really think that if there is no solution for problem 3, there is
little point in pushing this forward.

For a "better encryption only" scheme, we could use a much simpler scheme, like
having a number of reserved blocks on the start of the block device to hold a
bitmap of all the blocks. On this bitmap a 1 means that the block is
uncompressed, so that if, after compression, the block is bigger than the
original we can store it uncompressed.
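
Something like this, in user space and with made-up helpers (write_sector()
would be whatever the dm target actually uses), just to show the write-path
decision:

#include <stdint.h>
#include <string.h>
#include <zlib.h>

#define SECTOR_SIZE	512

/* 1 bit per block, loaded from / flushed to the reserved blocks at the
 * start of the device.  Size is arbitrary here. */
static uint8_t bitmap[1 << 17];

static void set_uncompressed(uint64_t blk, int val)
{
	if (val)
		bitmap[blk / 8] |=  (uint8_t)(1 << (blk % 8));
	else
		bitmap[blk / 8] &= (uint8_t)~(1 << (blk % 8));
}

/* Compress one sector into "out"; if the result does not fit in a single
 * sector, store the data raw and flag the block in the bitmap.  Either
 * way exactly one sector goes to disk (and then through the cipher). */
static void store_block(uint64_t blk, const unsigned char *data,
			unsigned char *out)
{
	uLongf outlen = SECTOR_SIZE;

	if (compress2(out, &outlen, data, SECTOR_SIZE,
		      Z_DEFAULT_COMPRESSION) == Z_OK) {
		set_uncompressed(blk, 0);
	} else {
		/* Z_BUF_ERROR: the compressed output would be larger than
		 * the original sector, so keep it uncompressed. */
		memcpy(out, data, SECTOR_SIZE);
		set_uncompressed(blk, 1);
	}
	/* write_sector(blk, out) would go here, followed by an eventual
	 * flush of the dirty bitmap block. */
}

On read, a set bit means just decrypt and return the sector; a clear bit means
decrypt and then inflate it.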

--
Paulo Marques - www.grupopie.com
"In a world without walls and fences who needs windows and gates?"

