 
Subject: Re: [PATCH] mmu notifiers #v5
On Thu, Jan 31, 2008 at 05:44:24PM -0800, Christoph Lameter wrote:
> The trouble is that the invalidates are much more expensive if you have to
> send these to remote partitions (XPmem). And it's really great if you can
> simply tear down everything. Certainly this is a significant improvement
> over the earlier approach but you still have the invalidate_page calls in
> ptep_clear_flush. So they fire needlessly?

Dunno, they certainly fire more frequently than yours; even _pages
fires more frequently than range_start,end, but don't forget why!
That's because I have a different spinlock for every 512
ptes/4k-gru-tlbs being invalidated... So it pays off in
scalability. I'm unsure if GRU could play tricks with your patch to
still allow faults to happen in parallel when they're on virtual
addresses not in the same 2M naturally aligned chunk.
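
To make the locking granularity concrete, here's a minimal sketch
(simplified, not the literal #v5 code; the mmu_notifier() macro form is
assumed from the patch) of tearing down one pte page under its own PT
lock, so faults on other 2M-aligned chunks only contend on their own
pte-page locks:

static void zap_one_pte_page(struct mm_struct *mm, pmd_t *pmd,
			     unsigned long addr, unsigned long end)
{
	unsigned long start = addr;
	spinlock_t *ptl;
	pte_t *pte;

	/* each pte page (512 ptes = one 2M region) has its own lock */
	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
	do {
		if (!pte_none(*pte))
			ptep_get_and_clear(mm, addr, pte);
	} while (pte++, addr += PAGE_SIZE, addr != end);
	/* one notifier invocation per pte page, still under the PT lock */
	mmu_notifier(invalidate_pages, mm, start, end);
	pte_unmap_unlock(pte - 1, ptl);
}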

> Serializing access in the device driver makes sense and comes with the
> additional possibility of not having to increment page counts all the time.
> So you trade one cacheline dirtying for many that are necessary if you
> always increment the page count.

Note that my #v5 doesn't require increasing the page count all the
time, so GRU will work fine with #v5.
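
For instance, a GRU-style handler under #v5 could be as thin as the
following sketch (gru_mm_struct, ms_notifier and gru_flush_tlb_range
are hypothetical driver names, not taken from any posted code):

static void gru_invalidate_pages(struct mmu_notifier *mn,
				 struct mm_struct *mm,
				 unsigned long start, unsigned long end)
{
	/* hypothetical per-mm GRU state embedding the notifier */
	struct gru_mm_struct *gms = container_of(mn, struct gru_mm_struct,
						 ms_notifier);

	/*
	 * Atomic context is fine here: just shoot down the GRU TLB
	 * for the range.  No page refcounting is needed because the
	 * hook runs before the old pages can be freed.
	 */
	gru_flush_tlb_range(gms, start, end - start);
}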

See this comment in my patch:

/*
* invalidate_page[s] is called in atomic context
* after any pte has been updated and before
* dropping the PT lock required to update any Linux pte.
* Once the PT lock is released the pte will have its
* final value to export through the secondary MMU.
* Before this is invoked any secondary MMU is still ok
* to read/write the page previously pointed to by the
* Linux pte because the old page hasn't been freed yet.
* If required, set_page_dirty has to be called internally
* by this method.
*/


invalidate_page[s] is always called before the page is freed. Taking
advantage of _pages in certain places will require modifications to
the TLB flushing logic; for now it's just safe.
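
Condensed, the ordering the comment guarantees looks like this at a
try_to_unmap_one-style call site (a sketch only; in #v5 the notifier
call is folded into ptep_clear_flush, shown expanded here):

	pteval = ptep_clear_flush(vma, address, pte); /* Linux pte cleared */
	mmu_notifier(invalidate_page, mm, address);   /* secondary MMU shot down */
	page_remove_rmap(page, vma);
	page_cache_release(page); /* only now can the old page be freed */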

> How does KVM ensure the consistency of the shadow page tables? Atomic ops?

A per-VM mmu_lock spinlock is taken to serialize the access, plus
atomic ops for the CPU.
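
A sketch of what that looks like on the KVM side (hook and helper
names assumed, not quoted from the patch; kvm_unmap_hva stands for the
rmap-based spte zapper):

static void kvm_mmu_notifier_invalidate_page(struct mmu_notifier *mn,
					     struct mm_struct *mm,
					     unsigned long address)
{
	struct kvm *kvm = container_of(mn, struct kvm, mmu_notifier);

	/*
	 * mmu_lock serializes this against the shadow page fault
	 * path; dirty/accessed bits in the sptes are handled with
	 * atomic ops since the CPU can set them concurrently.
	 */
	spin_lock(&kvm->mmu_lock);
	kvm_unmap_hva(kvm, address);
	spin_unlock(&kvm->mmu_lock);
}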

> The GRU has no page table on its own. It populates TLB entries on demand
> using the linux page table. There is no way it can figure out when to
> drop page counts again. The invalidate calls are turned directly into tlb
> flushes.

Yes, this is why it can't serialize follow_page with only the PT lock
under your patch. KVM may do it once you add start,end to range_end,
but only thanks to the additional pin on the page.
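
The pin-based serialization amounts to the following pattern in the
shadow fault path (a hedged sketch; the mmu_seq/mmu_notifier_seq
counter is an assumption to illustrate the recheck, not code from
either patch):

	/* outside mmu_lock: pin the page so it can't be freed under us */
	npages = get_user_pages(current, mm, addr, 1, 1, 0, &page, NULL);

	spin_lock(&kvm->mmu_lock);
	if (mmu_seq != kvm->mmu_notifier_seq) {
		/* an invalidate ran between the pin and the lock: retry */
		spin_unlock(&kvm->mmu_lock);
		put_page(page);
		goto retry;
	}
	/* install the spte mapping 'page' here: safe, no invalidate raced */
	spin_unlock(&kvm->mmu_lock);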

