Subject: Re: [PATCH] [Request for inclusion] Filesystem in Userspace
From: Miklos Szeredi <>
Date: Wed, 15 Dec 2004 22:49:44 +0100
> > No partitioning is needed.  If fuse doesn't consume too much memory
> > for dirty data buffers that memory is free to use for other purposes.
> >
> > But fuse would be limited in the number of pages which it can use for
> > dirty buffers exactly to prevent it from causing OOM.
>
> yes, that will work.  will need to be extra-careful when one fuse is
> loopback-mounted on another.
Since loopback doesn't use shared mapping (I think it uses sendfile),
this isn't a problem for fuse: all non-mmap writes are synchronous, so
there'll be no dirty pages (only locked ones), and any allocation by
the userspace filesystem won't deadlock on page reclaim.
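To illustrate what I mean by synchronous (a rough sketch only, the
function names are made up, not actual fuse code): the page stays
locked for the duration of the request and is never marked dirty, so
reclaim never has to write it back through the userspace filesystem.

	/*
	 * Sketch of a synchronous write: lock the page, send the
	 * request to the userspace filesystem, and block until it
	 * replies.  The page is never marked dirty, so the page
	 * reclaim path never calls back into the filesystem.
	 */
	static int fuse_write_sync(struct page *page, loff_t pos, size_t count)
	{
		int err;

		lock_page(page);
		err = fuse_send_write_request(page, pos, count); /* blocks */
		unlock_page(page);
		return err;
	}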
I think the solution to the writable mmap problem is also to make
those writes "quasi-synchronous", by not letting too many pages get
dirty.
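Very roughly, and purely as a sketch (the counter and waitqueue
fields are invented for the example), the throttling could look like
this: a writer that would push the per-mount dirty count over its
limit sleeps until writeback has made some room.  Note that this only
limits dirtying, it doesn't reserve any pages.

	#define FUSE_MAX_DIRTY	256	/* pages; arbitrary for the example */

	/* called before marking a page dirty (simplified, ignores races) */
	static void fuse_wait_for_dirty_room(struct fuse_conn *fc)
	{
		wait_event(fc->dirty_waitq,
			   atomic_read(&fc->nr_dirty) < FUSE_MAX_DIRTY);
		atomic_inc(&fc->nr_dirty);
	}

	/* called when writeback of a dirty page completes */
	static void fuse_dirty_done(struct fuse_conn *fc)
	{
		atomic_dec(&fc->nr_dirty);
		wake_up(&fc->dirty_waitq);
	}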
> I'm concerned that you're duplicating all the accounting done currently,
> and all of the writeback logic that is dependent on the number of dirty
> pages.
Maybe it can be done without duplication.  Since all the memory
reclaim code is based on the "zone" concept, maybe that can be
reused.  I don't know how far-fetched the idea of "virtual zones" is:
zones which borrow pages from the physical zones, but have their own
limits.
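Just to make the idea concrete (this structure doesn't exist, it's
only to show what I'm imagining): such a zone would own no pages
itself, but would account borrowed pages against its own limits, so
the existing balancing logic could in principle treat it like any
other zone.

	/*
	 * Purely hypothetical "virtual zone": no pages of its own,
	 * just accounting of pages borrowed from a physical zone,
	 * with private limits for the reclaim/writeback logic.
	 */
	struct virtual_zone {
		struct zone	*backing;	/* physical zone pages come from */
		unsigned long	nr_borrowed;	/* pages currently borrowed */
		unsigned long	max_pages;	/* private limit */
	};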
> an additional concern is a fuse/non-fuse mix - how do you balance them out?
I wouldn't want to reserve any pages for fuse, only limit the number of dirtiable pages. That means that if all pages are used up for non-fuse purposes, that's OK.
> I'm no mmap expert, but doesn't writing to a mmaped page have to
> increase your dirty counter somehow?
For ramfs/tmpfs it doesn't. But I'm not saying that this is the solution for fuse. This was a purely theoretical idea.
Thanks, Miklos