Date:    Wed, 9 Feb 2000 20:21:17 +0000 (GMT)
From:    Riley Williams <>
Subject: Re: 2.4 Features
Hi Peter.
>>> Thinking about the implications of an indirection layer, I don't
>>> see any. E2compr doesn't use more than a real physical page of
>>> memory at a time?
>> Yes, it does. And that's where the pain comes. File is divided
>> into groups of pages and every group is compressed separately.
> Surely this is compression algorithm dependent! AFAIR all the
> compression algorithms available in e2compr are block
> compression algorithms. Plenty of them can be blocked 4K at a
> time.
They all can be - or 8K at a time on processors with 8K pages.
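To make that concrete, here is an untested user-space sketch (plain
zlib, not e2compr's actual code) of what "blocking" a compressor at
page granularity means: each 4K block is compressed as an independent
stream, so any single page can later be decompressed without touching
its neighbours. Build with `cc blockz.c -lz`:

#include <stdio.h>
#include <zlib.h>

#define BLOCK_SIZE 4096 /* assume 4K pages, as on i386 */

int main(void)
{
    unsigned char in[BLOCK_SIZE];
    unsigned char out[BLOCK_SIZE * 2]; /* room for worst-case expansion */
    size_t n;

    /* Compress stdin one page-sized block at a time; each block is an
     * independent zlib stream, decompressible on its own. */
    while ((n = fread(in, 1, BLOCK_SIZE, stdin)) > 0) {
        uLongf outlen = sizeof(out);
        if (compress2(out, &outlen, in, (uLong)n, Z_BEST_SPEED) != Z_OK) {
            fprintf(stderr, "compress2 failed\n");
            return 1;
        }
        printf("block: %4zu -> %4lu bytes\n", n, (unsigned long)outlen);
    }
    return 0;
}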
> If I remember even more correctly, that is one of the default
> choices (and if I remember even more and more correctly, I
> usually change it to 32K to get better ratios, since gzip is
> fast on decompress, and my cpu is faster on compress than the
> disk is on write).
That is indeed one of the compile-time options, and the current default is 32K blocks.
> If the above is right, then all one has to do for the moment is
> disable e2compr compression blocks of more than PAGESIZE.
That would be one way of dealing with the problem.
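For what it's worth, a minimal sketch of that clamp (names invented
for illustration; e2compr's real code differs): whatever per-file
cluster size is requested, round it down to one page so a compression
cluster never spans multiple page-cache pages:

#include <stdio.h>

#define PAGE_SIZE 4096 /* assume i386 */

/* Hypothetical helper: cap the requested cluster size at one page. */
static unsigned int clamp_cluster_bytes(unsigned int requested)
{
    return requested > PAGE_SIZE ? PAGE_SIZE : requested;
}

int main(void)
{
    unsigned int sizes[] = { 4096, 8192, 32768 }; /* 4K, 8K, 32K clusters */
    for (int i = 0; i < 3; i++)
        printf("requested %5u -> used %4u\n", sizes[i],
               clamp_cluster_bytes(sizes[i]));
    return 0;
}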
Best wishes from Riley.
 * Copyright (C) 1999, Memory Alpha Systems.
 * All rights and wrongs reserved.
+----------------------------------------------------------------------+
| There is something frustrating about the quality and speed of Linux  |
| development, ie., the quality is too high and the speed is too high, |
| in other words, I can implement this XXXX feature, but I bet someone |
| else has already done so and is just about to release their patch.   |
+----------------------------------------------------------------------+
 * http://www.memalpha.cx/Linux/Kernel/