Date: Wed, 17 Feb 1999 10:09:57 -0800 (PST)
From: Linus Torvalds <>
Subject: Re: fsync on large files
On Wed, 17 Feb 1999, Stephen C. Tweedie wrote:
>
> The atomic write behaviour is a by-product of fixing an
> entirely different problem (concurrent truncate, for which making it
> atomic does make a lot of sense).
No. The atomic write behaviour was implemented because we couldn't come up with any other reasonable scheme to handle truncation sanely, but that was after we had expended tons of effort on making a "normal" write do the right thing.
I believe that we should just rip out the code that tried to handle concurrent normal writes, and simply rely on the fact that they can't happen.
For concurrent writes, it may be entirely acceptable to completely _drop_ the write semaphore at well-defined places and retry, but that's going to be a per-filesystem thing: the VFS layer is going to basically grab the semaphore and default to complete atomicity. That way it's up to the low-level filesystem to handle concurrency, if the filesystem writer is 100% sure he knows what he is doing.
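To make that split concrete, here is a minimal userspace sketch in plain C with pthreads (not the actual kernel code; vfs_write(), example_fs_write() and the struct layout are all made up for illustration): the upper layer grabs the per-file semaphore and so defaults to complete atomicity, while a filesystem method that is sure of what it is doing can drop the lock at a well-defined place and re-take it.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical stand-ins for the VFS structures, not real kernel types. */
struct file {
        pthread_mutex_t i_sem;        /* per-inode write semaphore */
        long (*write)(struct file *, const char *, unsigned long);
};

/* The "VFS layer": grab the semaphore, default to complete atomicity. */
long vfs_write(struct file *f, const char *buf, unsigned long count)
{
        long ret;

        pthread_mutex_lock(&f->i_sem);   /* whole write is atomic by default */
        ret = f->write(f, buf, count);   /* low-level fs runs fully locked */
        pthread_mutex_unlock(&f->i_sem);
        return ret;
}

/*
 * A filesystem that is 100% sure it knows what it is doing may drop the
 * semaphore at a well-defined place (e.g. while sleeping on I/O) and
 * re-grab it afterwards; every other filesystem simply inherits full
 * atomicity from the caller and never touches the lock at all.
 */
static long example_fs_write(struct file *f, const char *buf, unsigned long count)
{
        (void)buf;
        /* ... part that needs exclusion, done under the semaphore ... */
        pthread_mutex_unlock(&f->i_sem); /* well-defined concurrency window */
        /* ... wait for I/O; other writers may run here ... */
        pthread_mutex_lock(&f->i_sem);   /* re-grab before returning */
        return (long)count;
}

int main(void)
{
        struct file f = { PTHREAD_MUTEX_INITIALIZER, example_fs_write };
        printf("wrote %ld bytes\n", vfs_write(&f, "data", 4));
        return 0;
}

Note that atomicity is the default precisely because the caller owns the lock: a filesystem has to opt out explicitly, instead of every filesystem having to get opt-in locking right.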
We had exactly this problem with block device drivers, where the only sane way to handle it is to let the upper levels do the locking so that the lower levels don't have to know (and more importantly: that way we can _change_ the locking setup without changing every single driver or filesystem - for example, the device driver lock is a global thing, but can be made into a per-device thing without having to completely rewrite all drivers).
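The same idea in miniature (again plain C with pthreads; this io_request_lock is only a userspace stand-in for the kernel's global driver lock, and submit_request() is hypothetical): because the upper level takes the lock, going from one global lock to per-device locks is a two-line change in the caller, and no driver source needs touching.

#include <pthread.h>

/* Hypothetical driver interface; drivers never take locks themselves. */
struct device {
        pthread_mutex_t lock;         /* per-device lock, invisible to the driver */
        void (*request)(struct device *);
};

#ifdef GLOBAL_LOCK
/* Old scheme: one global lock serializing every device. */
static pthread_mutex_t io_request_lock = PTHREAD_MUTEX_INITIALIZER;
#define dev_lock(d)     pthread_mutex_lock(&io_request_lock)
#define dev_unlock(d)   pthread_mutex_unlock(&io_request_lock)
#else
/* New scheme: per-device locks. Only these two lines change. */
#define dev_lock(d)     pthread_mutex_lock(&(d)->lock)
#define dev_unlock(d)   pthread_mutex_unlock(&(d)->lock)
#endif

/* Upper level does the locking, so the lower level doesn't have to know. */
void submit_request(struct device *dev)
{
        dev_lock(dev);
        dev->request(dev);            /* driver always runs with "the" lock held */
        dev_unlock(dev);
}

static void noop_request(struct device *dev) { (void)dev; }

int main(void)
{
        struct device dev = { PTHREAD_MUTEX_INITIALIZER, noop_request };
        submit_request(&dev);
        return 0;
}

Flipping GLOBAL_LOCK shows the payoff: the locking policy changes in one place, while submit_request() and every driver stay byte-for-byte identical.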
Linus