Subject: Re: [patch 3/9] Guest page hinting: volatile page cache.
On Mon, 2006-09-04 at 13:21 +0200, Martin Schwidefsky wrote:
> Any kind of locking won't work. You need the information that a page has
> been discarded until the page has been freed. Only then the fact that
> the page has been discarded may enter nirvana. Any kind of lock needs to
> be freed again to allow the next discard fault to happen. Since you
> don't know when the last page reference is returned you cannot hold the lock
> until the page is free.

First of all, you *CAN* sleep with the BKL held. ;)

Why doesn't the normal lock_page() help? It can sleep, too.

As far as simplifying the patches goes, I feel like some of the
page_make_stable() stuff should be done inside of page_cache_get().
Perhaps the API needs to be changed so that a page_cache_get() can fail.
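
Roughly something like this, just as a sketch -- the name is made up,
and I am assuming page_make_stable() from the patches returns non-zero
when the page could be made stable and zero when the host has already
discarded it:

#include <linux/pagemap.h>

/*
 * Sketch only: a variant of page_cache_get() that can fail.
 * page_make_stable()'s return convention is assumed, not taken from
 * the patches.
 */
static inline int page_cache_get_stable(struct page *page)
{
        if (!page_make_stable(page))
                return 0;       /* discarded: caller has to retry/refault */
        page_cache_get(page);
        return 1;
}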

There are also a ton of "mapping == page->mapping" tests all over.
Perhaps you need a page_still_in_mapping(page, mapping) call that also
checks the page's discard state.
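
Just a sketch of what I mean, with the name invented here
(PageDiscarded() is the flag test the patches introduce):

#include <linux/mm.h>

/*
 * Sketch: fold the existing "mapping == page->mapping" tests and the
 * discard test into one helper.
 */
static inline int page_still_in_mapping(struct page *page,
                                        struct address_space *mapping)
{
        return page->mapping == mapping && !PageDiscarded(page);
}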

I also have the feeling that every single page_host_discards() check
which is actually placed in the VM code shouldn't be there. The ones in
page_make_stable() and friends are OK, but the ones in
shrink_inactive_list() seem bogus to me. Looks like they should be
covered up in some _other_ function that checks PageDiscarded().
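
Something along these lines, for example -- the name is made up here,
page_host_discards() and PageDiscarded() are from the patches:

#include <linux/mm.h>

/*
 * Sketch: hide the host-discard test so that vmscan.c and friends
 * only ever ask one question about a page.
 */
static inline int page_was_discarded(struct page *page)
{
        if (!page_host_discards())
                return 0;
        return PageDiscarded(page);
}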

You could even put these things in (what are now) simple functions like
lru_to_page(). The logic would be along the lines of: whenever I am
looking into the LRU, I need to make sure this page is still actually
there.
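
For example (sketch only; lru_to_page_checked() is an invented name and
it uses the page_was_discarded() helper sketched above):

#include <linux/list.h>
#include <linux/mm.h>

/*
 * The real lru_to_page() is just
 *      list_entry((head)->prev, struct page, lru)
 * A checked variant could make the "is this page still really on the
 * LRU" test implicit for every caller in vmscan.c.
 */
static inline struct page *lru_to_page_checked(struct list_head *head)
{
        struct page *page = list_entry(head->prev, struct page, lru);

        if (page_was_discarded(page))   /* sketch from above */
                return NULL;            /* caller skips pages the host took back */
        return page;
}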

As for the locking, imagine a seqlock (per-zone, node, section, hash,
anon_vma, mapping, whatever...). A write lock is taken any time that
PG_discard would have been set, and the page is placed into a list so
that it can be found (the data structure isn't important now). All of
the places that currently check PG_discard would go and take a read on
the seqlock. If they fail to acquire it (what is normally now a loop),
they would go look in the list to see if the page they are interested in
is there. If it is, then they treat it as discarded, otherwise they
proceed normally. So, the operation is normally very cheap (a
non-atomic read). It is very expensive _during_ a discard because of
the traversal of the list, but these should be rare.

The structure storing the page could be like this:

struct page_list {
        struct list_head list;
        struct page *page;
};

So that it doesn't require any extra space in the struct page, and
limits the overhead to only the people actually using the page discard
mechanism.
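
Putting it together, the whole scheme might look something like the
sketch below. Everything here is invented for illustration (one global
instance shown, but it could just as well be per-zone, per-mapping,
etc.), it uses the struct page_list above, and where that structure's
memory comes from is left open:

#include <linux/list.h>
#include <linux/mm.h>
#include <linux/seqlock.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(discard_lock);
static seqcount_t discard_seq = SEQCNT_ZERO;    /* 2.6.18-era initializer */
static LIST_HEAD(discarded_pages);

/* Write side: called wherever PG_discard would have been set today. */
static void note_page_discarded(struct page_list *pl, struct page *page)
{
        pl->page = page;
        spin_lock(&discard_lock);
        write_seqcount_begin(&discard_seq);
        list_add(&pl->list, &discarded_pages);
        write_seqcount_end(&discard_seq);
        spin_unlock(&discard_lock);
}

/* ...and undone once the discard has been completely dealt with. */
static void forget_page_discarded(struct page_list *pl)
{
        spin_lock(&discard_lock);
        write_seqcount_begin(&discard_seq);
        list_del(&pl->list);
        write_seqcount_end(&discard_seq);
        spin_unlock(&discard_lock);
}

/* Read side: what the current PG_discard tests would turn into. */
static int page_is_discarded(struct page *page)
{
        struct page_list *pl;
        unsigned seq;
        int ret = 0;

        /* Common case: no discard in flight, two plain reads and done. */
        seq = read_seqcount_begin(&discard_seq);
        if (list_empty(&discarded_pages) &&
            !read_seqcount_retry(&discard_seq, seq))
                return 0;

        /* A discard is (or was just) in flight: check the list properly. */
        spin_lock(&discard_lock);
        list_for_each_entry(pl, &discarded_pages, list) {
                if (pl->page == page) {
                        ret = 1;
                        break;
                }
        }
        spin_unlock(&discard_lock);
        return ret;
}

Open-coding the seqlock as a spinlock plus seqcount_t (rather than a
plain seqlock_t) is only so the slow-path readers have an ordinary lock
to take while they walk the list; otherwise a reader could be chasing
list pointers while a discard is adding or removing entries.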

-- Dave

