    Subject: Re: safe file systems

    > Do you think it would be possible to build a safe, slow file system?

    Yes. It does not need to be slow if ordered writes are added to the
    kernel.

    > By safe, I mean that I could hit reset in the middle of 50 parallel
    > un-tars and reboot the system and the file system comes up clean (no fsck,
    > but data loss)?

    In this specific case (i.e., you want to preserve the file system
    integrity, but do not care if you lose the information), adding
    metadata logging to the ext2 file system would be the easiest thing to
    do.

    The only requirement is that the metadata logging information should
    reach the disk before the actual metadata changes. Currently this is
    not supported by the Linux kernel, but the new driver structure from
    Thomas should provide a good framework for doing this.
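    For illustration only, a metadata log record on disk might look roughly
    like the following (the struct and its field names are invented for this
    sketch; they do not come from ext2 or any existing logging code):

        #include <stdint.h>

        /* Hypothetical on-disk layout of one metadata log record. */
        struct meta_log_record {
                uint32_t magic;        /* marks the block as a log record */
                uint32_t sequence;     /* increasing record number, for replay order */
                uint32_t target_block; /* disk block the metadata change applies to */
                uint32_t length;       /* bytes of copied metadata that follow */
                /* followed by 'length' bytes: the new contents of target_block */
        };

    Replay after a crash would then walk the records in sequence order and
    re-apply each one; this is only meant to show the shape of the idea.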

    If you do not mind having a slow file system, adding this would not
    require the new driver framework; you just need to force a synchronous
    write of the metadata log block to the disk before marking the
    actual modified metadata block dirty in ext2.
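    A minimal user-space sketch of that ordering (plain write(2)/fsync(2)
    rather than the kernel buffer cache; the function and its parameters are
    made up for illustration):

        #include <sys/types.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* Write the log record and force it to disk *before* touching the
         * metadata itself.  The fsync() on the log is what guarantees the
         * record is stable before the metadata change can reach the disk. */
        int log_then_update(int log_fd, int meta_fd,
                            const void *record, size_t rec_len,
                            const void *meta, size_t meta_len, off_t meta_off)
        {
                if (write(log_fd, record, rec_len) != (ssize_t) rec_len)
                        return -1;
                if (fsync(log_fd) != 0)         /* log record is now on disk */
                        return -1;
                if (lseek(meta_fd, meta_off, SEEK_SET) == (off_t) -1)
                        return -1;
                if (write(meta_fd, meta, meta_len) != (ssize_t) meta_len)
                        return -1;
                /* The metadata write may still sit in the cache; that is
                 * acceptable, because the log record already describes it. */
                return 0;
        }

    The synchronous log write on every metadata update is exactly what makes
    this approach slow; ordered writes would let the log write stay
    asynchronous as long as it is issued before the metadata block.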

    > Has anyone thought about this very much? If so, is there a mailing list or
    > archive that I can browse?

    I have been thinking about implementing this for some time now, have
    read a few bits about it, and was even considering implementing the
    slow approach yesterday (i.e., not depending on Thomas' new driver
    framework), but had to leave the office early.

