Subject: Re: e2compress in kernel 2.4
Would it not be better to do all the (de)compression at the page cache
level (http://linuxcompressed.sourceforge.net/)?  Then you get other
advantages as well.
You would then just use the compression bit in ext2 to mark blocks that
should not be decompressed before being passed down towards the disk,
and vice versa.
Note also that since ramfs uses the page cache directly, it would get
transparent compression for free?
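
To make that concrete, something along these lines (a sketch only:
EXT2_COMPR_FL is the existing ext2 flag bit, e2c_compress_block() is a
made-up helper name, and the real hook would sit in the writeback path):

/*
 * Sketch, not working code: decide per inode whether data headed for
 * disk should be compressed on its way out of the page cache.  The
 * page cache copy itself is never touched.
 */
static int e2c_block_to_disk(struct inode *inode, char *data, int len)
{
	/* Uncompressed files go to disk untouched. */
	if (!(inode->u.ext2_i.i_flags & EXT2_COMPR_FL))
		return len;

	/*
	 * Compressed files: only the data handed down towards the disk
	 * gets compressed here (and inflated again on the way up); the
	 * cached pages stay plain.
	 */
	return e2c_compress_block(inode, data, len);
}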

Padraig.

Pierre PEIFFER wrote:

>Hi!
>
> We would like to port e2compress from the 2.2 kernel series to 2.4,
>and we are looking for the right way to port the compression on the
>write side.
>
> For the read operation, we can adapt the original design: the 2.2
>part of e2compress can be integrated into the 2.4 version quite easily;
>the write part is a little bit more complicated...
>
> As we understand it, in the 2.2 kernel the compression is integrated
>between the page cache and the buffer cache, i.e. the data pointed to
>by the pages always remains uncompressed, but the compression occurs on
>the buffers, so the data pointed to by the buffers becomes compressed
>whenever the system decides to compress it.
> What we also saw is that in 2.2, in ext2_file_write, the writes occur
>on the buffers; after that, the system looks for the corresponding page
>and, if it is present, also updates the data in that page.
>
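
(For reference, a heavily simplified outline of the 2.2 flow described
above, from memory rather than literal 2.2 source; ext2_getblk() and
update_vm_cache() are the 2.2-era helpers, and the trailing step is
where e2compress can hook in without ever touching a page:)

/*
 * 2.2-style write, one block at a time: the user data lands in the
 * buffer cache first, the page cache copy (if any) is patched up to
 * match, and only afterwards may the buffer itself be compressed.
 */
static ssize_t write_one_block_sketch(struct inode *inode, loff_t pos,
				      const char *ubuf, size_t count)
{
	unsigned long blocksize = inode->i_sb->s_blocksize;
	unsigned long offset = pos & (blocksize - 1);
	struct buffer_head *bh;
	int err;

	bh = ext2_getblk(inode, pos >> inode->i_sb->s_blocksize_bits, 1, &err);
	if (!bh)
		return err;

	/* 1. The write goes to the buffer cache... */
	copy_from_user(bh->b_data + offset, ubuf, count);

	/* 2. ...and the corresponding page, if present, is updated too,
	 *    so the page cache always holds plain data. */
	update_vm_cache(inode, pos, bh->b_data + offset, count);

	/* 3. The buffer (not the page) can now be compressed whenever
	 *    the system decides to. */

	brelse(bh);
	return count;
}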
> But under 2.4, as we see in generic_file_write, the write operation
>occurs on pages, and no longer on buffers as in 2.2. The needed buffers
>are created and associated with the page, i.e. the b_data field of each
>buffer points to the data of the page itself.
>
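
(A simplified picture of the aliasing described in the paragraph above;
not literal kernel source, just an illustration of how the buffers that
the 2.4 write path, via the prepare_write/commit_write address_space
operations, attaches to a page point straight into the page's memory:)

/*
 * In 2.4 the buffer_heads attached to a page cache page do not own
 * their data: b_data aliases the page's own memory (highmem ignored
 * for brevity), so compressing bh->b_data in place also "compresses"
 * the cached page.
 */
static void attach_buffers_sketch(struct page *page, unsigned blocksize)
{
	struct buffer_head *head = page->buffers, *bh = head;
	unsigned offset = 0;

	if (!head)
		return;
	do {
		bh->b_data = page_address(page) + offset;
		offset += blocksize;
		bh = bh->b_this_page;	/* circular list on the page */
	} while (bh != head);
}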
> So here we are a little bit confused, because we don't know where to
>introduce the compression if we keep the same idea as the 2.2 design...
>On one hand, once the buffers are compressed, the pages will also
>become compressed; but on the other hand, we don't want the pages to be
>compressed, because the pages, once registered and linked to the inode,
>are supposed to stay uncompressed...
>
> So our idea was to introduce a notion of "cluster of pages",
>analogous to the existing notion of a cluster of blocks, i.e. perform
>the write on several pages at a time, then compress the buffers
>corresponding to these pages. But then the data of the buffers would
>have to be split off from the data of the pages, and that is our
>problem... We don't know how to do this. Is there a way to do it?
>
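
(One way to get that split: compress into a private scratch buffer
instead of in place, so the cached pages are never modified.  A rough
sketch; e2c_compress() and e2c_write_cluster() are hypothetical
helpers, highmem/kmap is ignored, and the low-level write is assumed
to be synchronous:)

/*
 * "Cluster of pages" idea: copy the plain data out of the page cache,
 * compress the private copy, and write only that copy to disk.  The
 * pages linked to the inode stay uncompressed throughout.
 */
static int e2c_write_compressed_cluster(struct inode *inode,
					struct page **pages, int npages)
{
	unsigned long plain_len = npages * PAGE_CACHE_SIZE;
	char *scratch;
	int compr_len, ret, i;

	scratch = vmalloc(plain_len);
	if (!scratch)
		return -ENOMEM;

	for (i = 0; i < npages; i++)
		memcpy(scratch + i * PAGE_CACHE_SIZE,
		       page_address(pages[i]), PAGE_CACHE_SIZE);

	/* Compression happens on the private copy only. */
	compr_len = e2c_compress(scratch, plain_len);

	/* Hand the compressed copy down towards the disk. */
	ret = e2c_write_cluster(inode, scratch, compr_len);

	vfree(scratch);
	return ret;
}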
> And, from a more general point of view, do you think our approach
>has a chance to succeed?
>
> If you have any questions, feel free to ask for more explanations.
>
> Thanks,
>
> Pierre & Denis
>
>PS: Please CC me personally on the answer; I'm not subscribed to the
>list.
>


