Subject: Re: NFS locking bug -- limited mtime resolution means nfs_lock() does not provide coherency guarantee
From: Trond Myklebust <>
Date: 18 Sep 2000 20:51:59 +0200
>>>>> " " == Michael Eisler <mre@Zambeel.com> writes:
> What if someone has written to multiple, non-contiguous regions
> of a page?
This has been foreseen and hence we only allow 1 contiguous region per page. If somebody tries to schedule a write to a second region that is not contiguous with the first, then we flush out the already scheduled write first, and wait on it to complete.
We also only allow 1 process at a time to schedule writes to a given page. This is done in case the file access permissions turn out to be untrustworthy (due to e.g. root squashing and the like): it means that we don't end up clobbering the data from another process's pending write.
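Roughly, the rule amounts to something like the sketch below. This is purely illustrative: the struct and function names (page_write_state, schedule_nfs_write, flush_and_wait) are invented here and are not the actual Linux NFS client code; the sketch only expresses the "one contiguous pending region, one writer per page" logic in C.

struct page_write_state {
	int		owner_pid;	/* only process allowed to schedule writes */
	unsigned int	start;		/* start of pending region within the page */
	unsigned int	end;		/* end of pending region (exclusive) */
	int		pending;	/* is a write currently scheduled? */
};

/* Send the pending region to the server and block until the WRITE
 * reply arrives; only then may a new region be scheduled. */
static void flush_and_wait(struct page_write_state *ps)
{
	/* ... issue the RPC for [ps->start, ps->end) and wait ... */
	ps->pending = 0;
}

static void schedule_nfs_write(struct page_write_state *ps, int pid,
			       unsigned int start, unsigned int end)
{
	/* A different process, or a region that is not contiguous with the
	 * pending one, forces a synchronous flush before we may proceed. */
	if (ps->pending &&
	    (ps->owner_pid != pid || start > ps->end || end < ps->start))
		flush_and_wait(ps);

	if (!ps->pending) {
		ps->owner_pid = pid;
		ps->start = start;
		ps->end = end;
		ps->pending = 1;
	} else {
		/* Contiguous or overlapping: just extend the pending region. */
		if (start < ps->start)
			ps->start = start;
		if (end > ps->end)
			ps->end = end;
	}
}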
> So, assume a page is 4096 bytes long, and there are 2048
> processes that each have write locked the even numbered bytes
> of the first page of a file. Once all the locks have been
> acquired, the page has been flushed to disk.
> Now we have a clean page. Each process then writes only one
> byte region that it has locked.
> Now they unlock the region.
> What happens?
> (btw, on another nfs client, 2048 processes have locked the odd
> number bytes of the first page of the same file).
> If you claim that first client doesn't overwrite the updates
> from the 2nd client, then I'll shut up.
Blech... It'll be slow, but as I said, only 1 contiguous region is allowed to have pending writes on any given page at any given point in time. Basically, you'll end up with 2048 byte-sized regions getting written out from each machine. Each time a new write gets scheduled, it will flush out the previous write and wait on its completion.
In your example, we will therefore in practice be doing completely synchronous writes...
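For what it's worth, from each process's point of view the access pattern in that example looks something like the user-space sketch below (write_my_byte is a made-up helper, not anything from this discussion; the offsets and single-byte writes are just your scenario). Because none of the even-numbered (or, on the other client, odd-numbered) byte regions are contiguous, every such write forces the previously scheduled one to be flushed and waited on.

#include <fcntl.h>
#include <unistd.h>

/* Lock one byte of the first page, dirty it, then unlock.  With 2048
 * of these per client on alternating bytes, no two pending regions are
 * ever contiguous, so the client degenerates to synchronous writes. */
static int write_my_byte(int fd, off_t offset, char value)
{
	struct flock fl = {
		.l_type   = F_WRLCK,
		.l_whence = SEEK_SET,
		.l_start  = offset,
		.l_len    = 1,
	};

	if (fcntl(fd, F_SETLKW, &fl) == -1)	/* take the byte-range lock */
		return -1;
	if (pwrite(fd, &value, 1, offset) != 1)	/* dirty a single byte */
		return -1;

	fl.l_type = F_UNLCK;			/* unlocking flushes dirty data */
	return fcntl(fd, F_SETLKW, &fl);
}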
Cheers,
  Trond