From: Linus Torvalds
Subject: Re: fsync on large files
Date: 18 Feb 1999 19:00:06 GMT
In article <Pine.LNX.3.96.990218093226.26524A-100000@sasami.anime.net>,
Dan Hollis <goemon@sasami.anime.net> wrote:
>On Thu, 18 Feb 1999, Richard Jones wrote:
>> About the *only* problem with ext2 right now is the long fsck times.
>
>No. The fact you can lose data after a crash is. The long fsck times is
>a secondary concern. Data integrity is the *primary* concern.
Journaling doesn't help - journaling essentially only ensures metadata consistency; it doesn't guarantee that the actual file contents are "up-to-date" (for example, you might have part of a write visible on disk, but not necessarily all of it).
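To make that concrete, here's a minimal user-space sketch (mine, not from the original mail; the file name and error handling are illustrative): even on a journaled filesystem, write() only dirties the page cache, so a crash before fsync() completes can leave partial file contents behind while the metadata journal replays perfectly cleanly.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
        const char buf[] = "important data\n";
        int fd = open("data.txt", O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0) {
                perror("open");
                return 1;
        }

        /* write() only copies into the page cache: a crash here can
         * lose part or all of the data, journal or no journal. */
        if (write(fd, buf, strlen(buf)) != (ssize_t)strlen(buf)) {
                perror("write");
                return 1;
        }

        /* fsync() blocks until the data is actually on disk - and on
         * a large file that is exactly the slow case this thread is
         * about. */
        if (fsync(fd) < 0) {
                perror("fsync");
                return 1;
        }

        return close(fd);
}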
If you want true data consistency, you need a database kind of filesystem, with true transactions (a log-based filesystem, or a filesystem that is clever about new block allocations and write ordering).
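As a sketch of what applications do in the meantime (a common idiom, not something claimed in the original mail; names are made up), you can get transaction-like behavior for a single file by writing a temporary copy, fsync()ing it, and rename()ing it into place - a crash then exposes either the complete old contents or the complete new ones, never a mix:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Atomically replace "path" with "data". The old contents stay
 * intact until the rename(), which is atomic on POSIX systems. */
int atomic_replace(const char *path, const char *tmp,
                   const char *data, size_t len)
{
        int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (fd < 0)
                return -1;

        /* Force the new data to disk *before* the switch-over. */
        if (write(fd, data, len) != (ssize_t)len || fsync(fd) < 0) {
                close(fd);
                unlink(tmp);
                return -1;
        }
        close(fd);

        return rename(tmp, path);
}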
>A jfs would eliminate this entirely, plus fix the long fsck times.
No. A regular jfs only gets rid of the fsck (or rather - makes it much faster), and guarantees that the filesystem _layout_ (as opposed to the contents of the files) is consistent and correct. But you need to do something extra on top of it to get any other guarantees (that's why you have mailers with extra lockfiles etc. that use the metadata consistency as an inter-transaction boundary).
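A sketch of that lockfile trick (illustrative only - real mailers differ in the details): open() with O_CREAT|O_EXCL either fully creates the lockfile or fully fails, and that's a pure metadata operation, which is exactly what a journaled filesystem does keep consistent across a crash.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Returns 0 if we took the lock, 1 if someone else holds it,
 * -1 on other errors. Creation is atomic at the metadata level. */
int take_lock(const char *lockpath)
{
        int fd = open(lockpath, O_WRONLY | O_CREAT | O_EXCL, 0644);

        if (fd < 0)
                return errno == EEXIST ? 1 : -1;
        close(fd);
        return 0;
}

void drop_lock(const char *lockpath)
{
        unlink(lockpath);
}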
And even with a jfs (or a logging filesystem), you can still have _true_ disk corruption screwing you. You can solve that with RAID drives, so there at least the solution is easy - but without RAID you may well want a full, forced fsck even for journaling filesystems.
Linus