Date:    Thu, 12 Mar 2009 12:43:23 -0700
From:    Andrew Morton <>
Subject: Re: [PATCH] NOMMU: Pages allocated to a ramfs inode's pagecache may get wrongly discarded
On Thu, 12 Mar 2009 12:25:24 +0000 David Howells <dhowells@redhat.com> wrote:
> Andrew Morton <akpm@linux-foundation.org> wrote:
>
> > Was there a specific reason for using the low-level SetPageDirty()?
> >
> > On the write() path, ramfs pages will be dirtied by
> > simple_commit_write()'s set_page_dirty(), which calls
> > __set_page_dirty_no_writeback().
> >
> > It just so happens that __set_page_dirty_no_writeback() is equivalent
> > to a simple SetPageDirty() - it bypasses all the extra things which we
> > do for normal permanent-storage-backed pages.
> >
> > But I'd have thought that it would be cleaner and more maintainable
> > (albeit a bit slower) to go through the a_ops?
>
> It basically boils down to SetPageDirty() with extra overhead, which you
> pointed out.  We're manually manipulating the pagecache for this inode
> anyway, so does it matter?
Not much. It just seems a bit more consistent.
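
(For reference, roughly what the helper under discussion looked like in
that era of mm/page-writeback.c - a condensed sketch, not guaranteed
verbatim.  It only flips the dirty bit and skips the radix-tree tagging
and dirty accounting that storage-backed filesystems need, which is why
it reduces to a bare SetPageDirty():

	int __set_page_dirty_no_writeback(struct page *page)
	{
		if (!PageDirty(page))
			SetPageDirty(page);
		return 0;
	}

Going "through the a_ops" would mean calling set_page_dirty(page), which
dispatches to mapping->a_ops->set_page_dirty - the same result here,
plus one indirect call.)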
> The main thing I think I'd rather get rid of is:
>
> 	if (!pagevec_add(&lru_pvec, page))
> 		__pagevec_lru_add_file(&lru_pvec);
> 	...
> 	pagevec_lru_add_file(&lru_pvec);
>
> which, as Peter points out:
>
> 	The ramfs stuff is rather icky in that it adds the pages to the aging
> 	list, marks them dirty, but does not provide a writeout method.
>
> 	This will make the paging code scan over them (continuously) trying to
> 	clean them, failing that (lack of writeout method) and putting them back
> 	on the list.
>
> Not requiring the pages to be added to the LRU would be a really good idea.
> They are not discardable, be it in MMU or NOMMU mode, except when the inode
> itself is discarded.
Yep, these pages shouldn't be on the LRU at all. I guess that will require some tweaks to core filemap.c code.
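
(The idiom in question, as a self-contained sketch against the ~2.6.29
pagevec API - the helper name is made up for illustration:

	#include <linux/mm.h>
	#include <linux/pagevec.h>

	/* Hypothetical helper showing the batching pattern ramfs uses
	 * when populating an inode's pagecache on NOMMU. */
	static void add_pages_to_file_lru(struct page **pages, int nr)
	{
		struct pagevec lru_pvec;
		int i;

		pagevec_init(&lru_pvec, 0);	/* 0: not a cold-page pagevec */
		for (i = 0; i < nr; i++) {
			/* pagevec_add() returns the slots still free;
			 * zero means the batch is full, so drain it
			 * onto the file LRU now. */
			if (!pagevec_add(&lru_pvec, pages[i]))
				__pagevec_lru_add_file(&lru_pvec);
		}
		/* Drain whatever remains in the final partial batch. */
		pagevec_lru_add_file(&lru_pvec);
	}

The objection is not to the batching itself but to the destination:
these pages are dirty, have no writeout method, and are never
reclaimable, so putting them on the LRU just gives vmscan busywork.)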
> Furthermore, does it really make sense for ramfs to use do_sync_read/write()
> and generic_file_aio_read/write(), at least for NOMMU-mode?  These add a lot
> of overhead, and ramfs doesn't really do either direct I/O or AIO.
>
> The main point in favour of using these routines is commonality; but they do
> add a lot of layers of overhead.
Yes, that code is very general, and hence always carries overhead for any
specific client.
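
(A condensed sketch of the layering being criticized, from memory of the
~2.6.29 fs/read_write.c shape with the EIOCBRETRY loop elided - not
verbatim source.  Every synchronous read manufactures a kiocb and an
iovec just to call the ->aio_read() method, even though ramfs never does
real AIO:

	ssize_t do_sync_read(struct file *filp, char __user *buf,
			     size_t len, loff_t *ppos)
	{
		struct iovec iov = { .iov_base = buf, .iov_len = len };
		struct kiocb kiocb;
		ssize_t ret;

		/* Set up a synchronous kiocb purely to satisfy the
		 * AIO-shaped method signature. */
		init_sync_kiocb(&kiocb, filp);
		kiocb.ki_pos = *ppos;

		ret = filp->f_op->aio_read(&kiocb, &iov, 1, kiocb.ki_pos);
		if (ret == -EIOCBQUEUED)
			ret = wait_on_sync_kiocb(&kiocb);
		*ppos = kiocb.ki_pos;
		return ret;
	}

A dedicated NOMMU ramfs read method could copy straight out of the
pagecache and skip all of this scaffolding.)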
> Does ramfs read/write performance matter that much, I wonder.
I doubt it.