Date: Mon, 20 Feb 2012 18:34:40 +1100
From: NeilBrown <>
Subject: Re: [PATCH 2/2] [RFC] fadvise: Add _VOLATILE,_ISVOLATILE, and _NONVOLATILE flags
Hi John, thanks for your answers....
> > The proposed mechanism - at a high level - is for user-space to be able to
> > say "This memory is volatile" and then later "this memory is no longer
> > volatile". If the content of the memory is still available the second
> > request succeeds. If not, it fails.. Well, actually it succeeds but reports
> > that some content has been lost. (not sure what happens then - can the app do
> > a binary search to find which pages it still has or something).
>
> The app should expect all was lost in that range.
So... the app has some idea of the real granularity of the cache - which stores several objects in one file - and marks each object volatile as a whole, then marks it non-volatile as a whole, and if that fails it assumes that the whole object is gone. However the kernel doesn't really have any idea of the real granularity and so just removes individual pages until it has freed up enough. It could have just corrupted a much bigger object, in which case the rest of that object is of no value and may as well be freed, but it has no way to know this, so it frees something else instead.
Is this a problem? If the typical granularity is a page or two then it is unlikely to hurt. If it is hundreds of pages I think it would mean that we don't make as good use of memory as we could (but it is all heuristics anyway and we probably waste lots of opportunities already so maybe it doesn't matter).
My gut feeling is that, since the app has concrete knowledge of the granularity, it should give that knowledge to the kernel somehow.
> > (technically we should probably include the cost to reconstruct the page,
> > which the kernel measures as 'seeks' but maybe that isn't necessary).
>
> Not sure I'm following this.
The shrinker in your code (and the original ashmem) contains:
.seeks = DEFAULT_SEEKS * 4,
This means that objects in this cache are 4 times as expensive to replace as objects in most other caches (the cost of replacing an entry in the cache is measured in 'seeks', and the default is to assume that it takes 2 seeks to reload an object).
I don't really know what the practical importance of 'seeks' is. Maybe it is close to meaningless, in which case you should probably use DEFAULT_SEEKS like (almost) everyone else. Maybe it is quite relevant, in which case maybe you should expose that setting to user-space somehow. Or maybe 'DEFAULT_SEEKS * 4' is perfect for all possible users of this caching mechanism.
I guess my point is that any non-default value should be justified.
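For reference, a minimal sketch of what a shrinker registration of this era looks like - the callback name and its empty body below are mine, not taken from the patch - just to show that '.seeks' is the only knob describing how costly a cache entry is to recreate:

/* Sketch only: roughly the 3.x-era shrinker interface; volatile_shrink()
 * and volatile_shrinker are names invented for illustration. */
#include <linux/module.h>
#include <linux/shrinker.h>

static int volatile_shrink(struct shrinker *s, struct shrink_control *sc)
{
	/* sc->nr_to_scan == 0 means "report how many objects you hold";
	 * otherwise free up to sc->nr_to_scan objects and report what remains. */
	return 0;
}

static struct shrinker volatile_shrinker = {
	.shrink = volatile_shrink,
	.seeks  = DEFAULT_SEEKS,	/* i.e. 2 'seeks' to recreate one object */
};

static int __init volatile_cache_init(void)
{
	register_shrinker(&volatile_shrinker);
	return 0;
}
module_init(volatile_cache_init);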
> > This is implemented by using files in a 'tmpfs' filesystem. These files
> > support three new flags to fadvise:
> >
> > POSIX_FADV_VOLATILE - this marks a range of pages as 'volatile'. They may be
> > removed from the page cache as needed, even if they are not 'clean'.
> > POSIX_FADV_NONVOLATILE - this marks a range of pages as non-volatile.
> > If any pages in the range were previously volatile but have since been
> > removed, then a status is returned reporting this.
> > POSIX_FADV_ISVOLATILE - this does not actually give any advice to the kernel
> > but rather asks a question: Are any of these pages volatile?
> >
> > Is this an accurate description?
>
> Right now its not tmpfs specific, but otherwise this is pretty spot on.
>
> > My first thoughts are:
> > 1/ is page granularity really needed? Would file granularity be sufficient?
>
> The current users of similar functionality via ashmem do seem to find
> page granularity useful. You can share basically an unlinked tmpfs fd
> between two applications and mark and unmark ranges of pages
> "volatile" (unpinned in ashmem terms) as needed.
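Just to pin down the usage pattern being described, a rough user-space sketch of the mark/unmark cycle (illustrative only: the flag values below are placeholders I made up, not the numbers from the RFC patch, and posix_fadvise() simply stands in for whatever the final syscall plumbing turns out to be):

/* Sketch of the proposed usage, not a program that works against any
 * released kernel: the POSIX_FADV_VOLATILE/NONVOLATILE values below are
 * placeholders. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#ifndef POSIX_FADV_VOLATILE
#define POSIX_FADV_VOLATILE	8	/* placeholder value */
#define POSIX_FADV_NONVOLATILE	9	/* placeholder value */
#endif

int main(void)
{
	int fd = open("/dev/shm/cachefile", O_RDWR | O_CREAT, 0600);
	off_t obj_off = 0, obj_len = 64 * 4096;		/* one cached object */

	/* Done with the object for now: the kernel may drop these pages. */
	posix_fadvise(fd, obj_off, obj_len, POSIX_FADV_VOLATILE);

	/* Want it back: a non-zero return would mean some pages were purged,
	 * so the whole object has to be rebuilt. */
	if (posix_fadvise(fd, obj_off, obj_len, POSIX_FADV_NONVOLATILE) != 0)
		printf("object lost - rebuild it\n");

	close(fd);
	return 0;
}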
Sharing an unlinked cache between processes certainly seems like a valid case that my model doesn't cover. I feel uncomfortable about different processes being able to unpin each other's pages. It means they need to negotiate with each other to ensure one doesn't unpin a page that the other is using.
If this was a common use case, it would make a lot of sense for the kernel to refcount the pinning so that a range only becomes really unpinned when no-one has it pinned any more.
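To be concrete about what I mean by refcounted pinning (pure illustration, nothing here comes from the patch): each range carries a pin count, and only the transition to zero makes it purgeable:

/* Illustration only: a per-range pin count, so a range becomes purgeable
 * only when the last user unpins it.  All names here are invented. */
#include <stdbool.h>

struct cache_range {
	long start, end;	/* byte range within the cache file */
	int  pins;		/* how many processes currently need it */
	bool purgeable;		/* may the kernel drop these pages? */
};

static void range_pin(struct cache_range *r)
{
	r->pins++;
	r->purgeable = false;
}

static void range_unpin(struct cache_range *r)
{
	if (--r->pins == 0)
		r->purgeable = true;	/* only now is it fair game */
}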
Do you know any more about these apps that share a cache file? Do they need extra inter-locking (or are they completely hypothetical?).
> > 2/ POSIX_FADV_ISVOLATILE is a warning sign to me - it doesn't actually
> > provide advice. Is this really needed? What for? Because it feels like
> > a wrong interface.
>
> It is more awkward, I agree. And the more I think about it, it seems
> like its something we can drop, as it is likely only useful as a probe
> before using a page, and using POSIX_FADV_NONVOLATILE on the range to
> be used would also provide the same behavior. So I'll drop it in the
> next revision.
Good. That makes me feel happier.
> > 3/ Given that this is specific to one filesystem, is fadvise really an
> > appropriate interface?
> >
> > (fleshing out the above documentation might be an excellent way to answer
> > these questions).
>
> So, the ashmem implementation is really tmpfs specific, but there's also
> the expectation on android devices that there isn't swap, so its more
> like ramfs. I'd like to think that this behavior makes some sense on
> other filesystems, providing a way to cheaply throw out dirty data
> without the cost of hitting the disk. However, the next time the file is
> opened, that could cause some really strange inconsistent results, with
> some recent pages written out and some stale pages. The vmtruncate would
> punch a hole instead of leaving stale data, but that still would have to
> hit the disk so its not free. So I'm not really sure if it makes sense
> in a totally generic way. That said, it would be easy for now to return
> errors if the fs isn't shmem based.
As I think I said somewhere, I cannot see how the functionality makes any sense at all on a storage-backed filesystem - and what you have said about inconsistent on-disk images only reinforces that. I think it should definitely be ramfs only (maybe tmpfs as well??).
> Really, I'm not married to any specific interface here. fadvise just
> seemed the most logical to me. Given page granularity is needed, what
> would be a filesystem specific interface that makes sense here?
OK, let me try again. This looks to me a bit like byte-range locking. Locking can already have a filesystem-specific implementation, so this could be implemented as a ramfs-specific locking protocol. It would be activated by some mount option (or it could even be a different filesystem type - ramcachefs).
1- a shared lock (F_RDLCK) pins the range in memory and prevents an
   exclusive lock, or any purge of pages.
2- an exclusive lock (F_WRLCK) is used to create or re-create an object in
   the cache.
3- when pages are purged, a lock-range is created which marks the range as
   purged and prevents any read lock from succeeding. This lock-range is
   removed when a write-lock is taken out.
So initially all pages are marked by an internal 'purged' lock indicating that they contain nothing.
Objects can be created by taking a write lock and writing data. Then unlocking (or downgrading to a read lock) allows them to be accessed by other processes. Any process that wants to read an object first asks for a shared lock. If this succeeds it can be sure that the pages are still available (and that no-one has an exclusive lock). If the shared lock fails then at least one page doesn't exist - probably all are gone. The process can then optionally try to get a write lock. Once it gets that it can revalidate somehow, or refill the object. When the last lock is removed, the locking code could keep the range information but mark it as unlocked and put it on an LRU list.
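From user-space the flow I'm imagining would look something like this (a sketch only: the fcntl() calls are ordinary POSIX byte-range locking, but the /mnt/ramcache mount and the purge-on-unlock semantics behind it are entirely hypothetical):

/* Sketch of the proposed usage from user-space.  The fcntl() locking is
 * ordinary POSIX; the purge semantics behind it are the hypothetical
 * 'ramcachefs' part. */
#include <fcntl.h>
#include <unistd.h>

static int lock_range(int fd, short type, off_t start, off_t len)
{
	struct flock fl = {
		.l_type   = type,	/* F_RDLCK, F_WRLCK or F_UNLCK */
		.l_whence = SEEK_SET,
		.l_start  = start,
		.l_len    = len,
	};
	return fcntl(fd, F_SETLK, &fl);
}

int main(void)
{
	int fd = open("/mnt/ramcache/objects", O_RDWR | O_CREAT, 0600);
	off_t off = 0, len = 64 * 4096;		/* one cached object */

	if (lock_range(fd, F_RDLCK, off, len) == 0) {
		/* Shared lock taken: the object is present and pinned, so read it. */
		lock_range(fd, F_UNLCK, off, len);	/* range becomes 'pending' again */
	} else if (lock_range(fd, F_WRLCK, off, len) == 0) {
		/* Some page was purged: regenerate the object under the
		 * exclusive lock, write it out, then unlock. */
		lock_range(fd, F_UNLCK, off, len);
	}
	close(fd);
	return 0;
}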
So 4 sorts of ranges are defined, and together they cover the entire file:
 - shared locks: these might overlap
 - exclusive locks: these don't overlap
 - purge locks: mark ranges that have been purged or never written
 - pending locks: mark all remaining ranges
When a shared or exclusive lock is released it becomes a pending lock. When the shrinker fires it converts some number of pending locks to purge locks and discards the pages wholly in them. A shared lock can only be taken when there is a shared or pending lock there. An exclusive lock can be taken when a purge or pending lock is present.
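Or, spelled out as a little state table (again just a sketch of the rules above, not code from any implementation):

/* Sketch of the range-state rules described above; purely illustrative. */
enum range_state { SHARED, EXCLUSIVE, PURGED, PENDING };

/* May a process take a shared (read) lock over a range in this state? */
static int can_take_shared(enum range_state s)
{
	return s == SHARED || s == PENDING;
}

/* May a process take an exclusive (write) lock over a range in this state? */
static int can_take_exclusive(enum range_state s)
{
	return s == PURGED || s == PENDING;
}

/* Releasing either kind of lock turns the range into PENDING; the shrinker
 * may later turn PENDING ranges into PURGED and drop the pages wholly
 * inside them. */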
For the most part this doesn't conflict with the more normal usage of byte range locks. However it does mean that a process cannot place a range in a state where some other process is allowed to write, but the kernel is not allowed to purge the pages. I cannot tell if this might be a problem. (it could probably be managed by some convention where locking the first byte in an object gives read/write permission and locking the rest keeps it in cache. One byte by itself will never be purged).
I'm not sure what should happen if you write without first getting a write lock. I guess it should turn a purge lock into a pending lock, but leave any other range unchanged.
NeilBrown