    Subject: Re: Transparent compression in the FS
    Quote from Larry McVoy <>:
    > On Wed, Oct 15, 2003 at 11:13:27AM -0400, Jeff Garzik wrote:
    > > Josh and others should take a look at Plan9's venti file storage method
    > > -- archival storage is a series of unordered blocks, all of which are
    > > indexed by the sha1 hash of their contents. This magically coalesces
    > > all duplicate blocks by its very nature, including the loooooong runs of
    > > zeroes that you'll find in many filesystems. I bet savings on "all
    > > bytes in this block are zero" are worth a bunch right there.
    > The only problem with this is that you can get false positives. Val Henson
    > recently wrote a paper about this. It's really unlikely that you get false
    > positives but it can happen and it has happened in the field.

    Surely it's just common sense to say that you have to verify the whole
    block - any function that maps N possible values onto fewer than N
    outputs must, by the pigeonhole principle, send two different inputs
    to the same output. So a hash shorter than the block itself is lossy
    by definition, and collisions are guaranteed to exist; the only
    question is how likely you are to hit one.
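
    To make that concrete, here is a minimal user-space sketch (not
    venti's actual code) of coalescing done safely: hash the block, then
    do a full byte-compare before treating it as a duplicate.
    block_lookup() and block_store() are hypothetical store primitives
    assumed for illustration; the digest comes from OpenSSL's SHA1().

    #include <string.h>
    #include <openssl/sha.h>

    #define BLKSZ 4096

    /* Hypothetical store primitives, assumed for this sketch. */
    extern const unsigned char *block_lookup(
            const unsigned char hash[SHA_DIGEST_LENGTH]);
    extern void block_store(const unsigned char hash[SHA_DIGEST_LENGTH],
                            const unsigned char *data);

    /* Returns 1 if the block was coalesced with a verified identical
     * copy, 0 if it was stored as new data. */
    int store_block(const unsigned char data[BLKSZ])
    {
            unsigned char hash[SHA_DIGEST_LENGTH];
            const unsigned char *old;

            SHA1(data, BLKSZ, hash);
            old = block_lookup(hash);
            if (old != NULL && memcmp(old, data, BLKSZ) == 0)
                    return 1;  /* verified duplicate: safe to coalesce */
            /*
             * Either no block has this hash, or - the rare case above -
             * two different blocks collided.  A real store must keep
             * both, e.g. by chaining blocks under one hash.
             */
            block_store(hash, data);
            return 0;
    }

    Without the memcmp() a collision silently merges two different
    blocks, which is exactly the lossy case above. With it, a collision
    costs only the compare plus a fallback path; for a 160-bit hash the
    birthday bound says you expect a collision only after on the order
    of 2^80 blocks, which is why it's "really unlikely" yet has still
    been seen in the field.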
