Date:    Tue, 23 May 2006 18:35:37 +0200 (CEST)
From:    Nuri Jawad <>
Subject: Re: Linux Kernel Source Compression
On Tue, 23 May 2006, Julian Seward wrote:
> It uses an adaptive huffman scheme devised by David Wheeler, which usually
> gets within 1% of the arithmetic coder that bzip1 used.
If that coder has patent issues, it shouldn't be used, of course, regardless of performance.
> bzip2, especially the 1.0.X series, is superior to bzip1 in terms of speed,
> memory use, robustness against bad-case inputs, recoverability of data
> from damaged compressed streams, and that it can be used as a library.
Superior in most respects, yes, but not in compression ratio. Anyway, calling bzip2 a step backwards was a bit of a provocation and not really meant seriously, but it does compress slightly worse.
Maybe bzip2 could be updated to make more use of today's fast CPUs? A much larger dictionary, or other computationally expensive improvements, could buy back some ratio.
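For reference, the knob bzip2 currently exposes for this trade-off is the BWT block size, capped at 900 kB. It is selectable as -1 through -9 on the command line and as the blockSize100k parameter of the library interface Julian mentions. A minimal sketch of the library call, assuming libbz2 is installed and the program is built with -lbz2 (the file name and sample data here are just placeholders):

#include <bzlib.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Placeholder input; in practice this would be kernel source. */
    char src[] = "example data that would normally be a kernel source tree...";
    unsigned int src_len = (unsigned int)strlen(src);

    /* bzlib documents the worst-case output as input + 1% + 600 bytes,
     * so a 1 kB buffer is plenty for this tiny example. */
    char dest[1024];
    unsigned int dest_len = sizeof(dest);

    int rc = BZ2_bzBuffToBuffCompress(dest, &dest_len,
                                      src, src_len,
                                      9,   /* blockSize100k: 1..9, i.e. 100k..900k blocks */
                                      0,   /* verbosity: silent */
                                      0);  /* workFactor: 0 = library default */
    if (rc != BZ_OK) {
        fprintf(stderr, "compression failed: %d\n", rc);
        return 1;
    }
    printf("%u bytes in, %u bytes out\n", src_len, dest_len);
    return 0;
}

Going beyond 900 kB blocks (or adding more expensive modelling) would need format changes, which is presumably the kind of update meant above.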
Regards,
Nuri