Subject: Re: Don't understand comment in arch/x86/boot/compressed/misc.c
Rob Landley <rob@landley.net> writes:

> It talks about when decompression in place is safe to do:
>
> * Getting to provable safe in place decompression is hard.
> * Worst case behaviours need to be analyzed.
> ...
> * The buffer for decompression in place is the length of the
> * uncompressed data, plus a small amount extra to keep the algorithm safe.
> * The compressed data is placed at the end of the buffer. The output
> * pointer is placed at the start of the buffer and the input pointer
> * is placed where the compressed data starts. Problems will occur
> * when the output pointer overruns the input pointer.
> *
> * The output pointer can only overrun the input pointer if the input
> * pointer is moving faster than the output pointer. A condition only
> * triggered by data whose compressed form is larger than the uncompressed
> * form.
>
> You have an output pointer at a lower address catching up to an input
> pointer at a higher address. If the input pointer is moving FASTER
> than the output pointer, wouldn't the gap between them grow rather
> than shrink?

The wording could be clearer, but the basic concept is sound in context. The
entire section is about how many bytes beyond the uncompressed size of the
data you need in order to guarantee you won't overrun your compressed data.

For gzip that is a smidge over a single compressed block.

In the worst case you have to assume that none of your blocks
actually compressed.
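
To put rough numbers on that worst case (illustrative only, not the kernel's
exact formula): deflate can always fall back to "stored" blocks of at most
65535 bytes with up to 5 bytes of framing each, and the gzip wrapper adds a
10-byte header and an 8-byte trailer, so even fully incompressible input only
grows by a small, bounded amount:

/*
 * Illustrative worst-case size of a gzip stream whose payload does not
 * compress at all.  Deflate "stored" blocks hold at most 65535 bytes and
 * cost up to 5 bytes of framing each; the gzip wrapper adds a 10-byte
 * header and an 8-byte trailer.  The kernel's real margin computation
 * lives in arch/x86/boot/compressed/mkpiggy.c, not here.
 */
#include <stdio.h>

static unsigned long gzip_worst_case(unsigned long olen)
{
	unsigned long blocks = (olen + 65534) / 65535;

	return olen + 5 * blocks + 10 + 8;
}

int main(void)
{
	unsigned long olen = 16UL * 1024 * 1024;	/* say, a 16 MiB image */
	unsigned long worst = gzip_worst_case(olen);

	printf("uncompressed %lu -> worst case %lu (overhead %lu bytes)\n",
	       olen, worst, worst - olen);
	return 0;
}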

So an input pointer going faster than an output pointer is a problem
if you try to limit yourself to exactly the area of memory that the
decompressed data lives in. In that case the input pointer will
run off the end of that area.

> The concern seems to be about COMPRESSING in place, rather than
> decompressing...?

No. In theory there is some data that grows when compressed. In
the best case that data will grow by only a single bit.
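
Here is a small userspace illustration of that growth, using zlib (not the
kernel's code): deflating a buffer of pseudo-random bytes typically produces
output a little larger than the input.

/*
 * Illustration only: deflate incompressible (pseudo-random) data with
 * zlib and compare sizes.  Build with: cc grow.c -lz
 */
#include <stdio.h>
#include <stdlib.h>
#include <zlib.h>

int main(void)
{
	uLong srclen = 1UL << 20;		/* 1 MiB of noise */
	uLongf dstlen = compressBound(srclen);	/* zlib's worst-case output size */
	unsigned char *src = malloc(srclen);
	unsigned char *dst = malloc(dstlen);
	uLong i;

	if (!src || !dst)
		return 1;

	srand(1);
	for (i = 0; i < srclen; i++)
		src[i] = rand() & 0xff;		/* effectively incompressible */

	if (compress2(dst, &dstlen, src, srclen, Z_BEST_COMPRESSION) != Z_OK)
		return 1;

	printf("in %lu bytes, out %lu bytes\n",
	       (unsigned long)srclen, (unsigned long)dstlen);
	free(dst);
	free(src);
	return 0;
}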

In case a picture will help.

Decompressed data goes here        Compressed data comes from here
|                                  |
0 v->                              v->
+---------------------------------------+-----+------------+
|                                       |extra|decompressor|
+---------------------------------------+-----+------------+

The question is how large that extra chunk needs to be. This matters
either when nothing compresses (the worst case for extra space) or, more
plausibly, when a run of blocks at the end does not compress (an
almost-worst-case scenario that can actually happen).
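
As a toy model of that picture (not the kernel's actual code, just the
invariant it relies on): the output pointer starts at the front of the single
buffer, the compressed input is parked at its end, and everything is fine as
long as the writer never catches up with the unread input.

/*
 * Toy model of the in-place layout in the picture above (not kernel code).
 * One buffer of olen + extra bytes holds both the output, growing from the
 * front, and the compressed input, parked at the very end.  Safety means
 * the output pointer never reaches the next unread input byte.
 */
#include <assert.h>
#include <string.h>

struct inplace {
	unsigned char *out;	/* next byte of decompressed output    */
	unsigned char *in;	/* next unread byte of compressed data */
};

static void inplace_setup(struct inplace *ip, unsigned char *buf,
			  unsigned long olen, unsigned long extra,
			  const unsigned char *compressed, unsigned long clen)
{
	ip->out = buf;
	ip->in = buf + olen + extra - clen;	/* compressed data at the end */
	memcpy(ip->in, compressed, clen);
}

/* the decompressor pulls its input through this... */
static unsigned char inplace_read(struct inplace *ip)
{
	return *ip->in++;
}

/* ...and pushes its output through this */
static void inplace_write(struct inplace *ip, unsigned char byte)
{
	/* the whole point of "extra": never clobber unread input */
	assert(ip->out < ip->in);
	*ip->out++ = byte;
}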

Things have changed a bit since I wrote that analysis. The computation
of the worst-case space has moved to mkpiggy.c, support for other
compression methods has been added, and we now have a mini ELF loader
in misc.c which adds an extra step to everything. But the overall
concepts remain valid.
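
For the curious, the kind of calculation that moved into mkpiggy.c has
roughly this shape; the constants below are placeholders I've assumed for
illustration, the real values are in arch/x86/boot/compressed/mkpiggy.c.

/*
 * Rough shape of the margin calculation now done at build time
 * (constants are illustrative placeholders, not mkpiggy.c's exact values).
 * ilen is the compressed size, olen the uncompressed size.
 */
static unsigned long inplace_offset(unsigned long ilen, unsigned long olen)
{
	unsigned long offs;

	/* if the "compressed" data is actually bigger, cover the difference */
	offs = (olen > ilen) ? olen - ilen : 0;
	/* a few bytes of per-block overhead for every 32K of output */
	offs += olen >> 12;
	/* fixed slack for headers, trailers and the decompressor itself */
	offs += 64 * 1024;
	/* round up to a page boundary */
	return (offs + 4095) & ~4095UL;
}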

Eric

