    Subject: Re: [OT] Microsoft invents symbolic links
    Ville Herva <> writes:

    > > From: Pavel Machek <>
    > > Date: Thu, 2 Mar 2000 23:40:32 +0100
    > > Subject: Re: [OT] Microsoft invents symbolic links
    > >
    > > Actually, 40% of my disk capacity is wasted in duplicates. Why? I do cp
    > > -a linux linux.backup before major changes. Automagical ways to get
    > I can't even guess what the ratio is on a small company's (like ours) file
    > server filled with old source versions and document copies.

    ever tried cvs?

    > > space back would be nice. (I also cp -a package ofic.package, so that I
    > > can diff -ur later... Hardlinks are not enough because I do not want to
    > > accidentally trash ofic.)
    > >
    > > So, I'd actually like a cow-link. cp -a --cow-link mc would be
    > > very useful for me.
    > I second that. We do incremental backups with
    > cp -al yesterdays-backup/ todays-backup/
    > rsync --archive --hard-links --whole-file --delete /backed-up-dir todays-backup
    > Of course, that works just fine (with very good disk space usage, since we
    > also use e2compr). But with cow-links, you could back up your working
    > (source, document, image) directory and not worry about disk space usage
    > or your editor creating a new inode on save.
    > On file servers there definitely are a lot of duplicates. Text documents,
    > source, and images tend to be duplicated - people have their personal copies.
    > People hacking on source have multiple copies of it.
    > You could have a cowlinkd running nightly on the file server, finding and
    > cow-linking those duplicates. (It could e2compr less-used files as well.)
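
    The hard-link limitation Pavel mentions is easy to see from user space:
    a hard link is just a second name for the same inode, so a write through
    either name shows up in both. A minimal sketch, with made-up file names
    and error checking omitted:

        #include <fcntl.h>
        #include <stdio.h>
        #include <sys/stat.h>
        #include <unistd.h>

        int main(void)
        {
            int fd = open("orig", O_CREAT | O_TRUNC | O_WRONLY, 0644);
            write(fd, "original contents\n", 18);
            close(fd);

            link("orig", "backup");          /* what cp -al does: one inode, two names */

            fd = open("backup", O_WRONLY | O_APPEND);
            write(fd, "oops\n", 5);          /* "orig" now contains the change too     */
            close(fd);

            struct stat st;
            stat("orig", &st);
            printf("orig: %lld bytes, %ld links\n",
                   (long long) st.st_size, (long) st.st_nlink);
            return 0;
        }

    The cp -al + rsync scheme above survives because rsync writes a changed
    file out as a new file and renames it over the old name, which breaks the
    link; a program that rewrites a file in place would quietly modify every
    "backup" as well.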

    yes, and introduce overhead in the kernel, because each time you copy or
    modify a file it would have to check whether it is a 'cow link';
    and if it is, it will end up being converted from a "cow link" to a normal file.
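
    Roughly, that check has to sit in the write path: before modifying shared
    data you must notice it is shared and copy it first. A toy user-space
    simulation of that bookkeeping (not kernel code - the struct and function
    names are invented, it only shows the shape of the extra work per write):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        struct cow_buf {
            char   *data;      /* stands in for the file's data blocks     */
            int    *refcount;  /* shared by every cow-link to those blocks */
            size_t  len;
        };

        static void cow_write(struct cow_buf *b, size_t off,
                              const char *src, size_t n)
        {
            if (*b->refcount > 1) {            /* the extra check on every write */
                char *copy = malloc(b->len);
                memcpy(copy, b->data, b->len); /* break the link: copy the data  */
                (*b->refcount)--;
                b->data = copy;
                b->refcount = malloc(sizeof(int));
                *b->refcount = 1;              /* now a normal, unshared file    */
            }
            memcpy(b->data + off, src, n);     /* safe to modify privately       */
        }

        int main(void)
        {
            int  shared_ref = 2;
            char shared_data[] = "hello world";
            struct cow_buf a = { shared_data, &shared_ref, sizeof(shared_data) };
            struct cow_buf b = { shared_data, &shared_ref, sizeof(shared_data) };

            cow_write(&a, 0, "HELLO", 5);      /* a gets private storage */
            printf("a: %s\nb: %s\n", a.data, b.data);
            return 0;
        }

    The copy happens only on the first modification; after that the file
    behaves like an ordinary one, so the steady-state cost is the per-write
    "is this still shared?" test.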

    > Of course, to get the most out of the redundancy in, say, a document or
    > source tree, the cow scheme would have to be at the block level rather than
    > the inode level (as it is at the page level in the VM). I suppose that just
    > wouldn't even theoretically be affordable, since it would probably take
    > per-block reference counting.
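
    To put a rough number on that, with made-up but era-plausible figures:
    per-block reference counts scale with the number of blocks, and they have
    to be kept consistent on every block allocation and free.

        #include <stdio.h>

        int main(void)
        {
            long long fs_size    = 10LL << 30;   /* assume a 10 GB filesystem */
            long long block_size = 1024;         /* assume 1 KB blocks        */
            long long blocks     = fs_size / block_size;
            long long per_count  = 2;            /* a 16-bit refcount each    */

            printf("%lld blocks -> %lld MB of reference counts\n",
                   blocks, (blocks * per_count) >> 20);
            return 0;
        }

    That works out to roughly 10 million blocks and about 20 MB of extra
    metadata for a 10 GB disk, before counting the cost of updating the
    counts safely on every write.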

    -- Yoann
    It is well known that M$ products don't free() after malloc();
    the unix community wishes them good luck in their future development.

