Subject: Re: [PATCH] mmu notifiers #v6
On Wed, Feb 20, 2008 at 11:39:42AM +0100, Andrea Arcangeli wrote:
> Given Nick's comments I ported my version of the mmu notifiers to
> latest mainline. There are no known bugs AFIK and it's obviously safe
> (nothing is allowed to schedule inside rcu_read_lock taken by
> mmu_notifier() with my patch).
> ....

I ported the GRU driver to use the latest #v6 patch and ran a series of
tests on it using our system simulator. The simulator is slow, so true
stress testing or swapping is not possible - at least not within a finite
amount of time.

Functionally, the #v6 patch seems to work for the GRU. However, I did
notice two significant differences that make #v6 perform worse for the
GRU than Christoph's patch does. I think one difference is easily
fixable, but the other is more difficult:

- the mmu_notifier_release() callout is placed differently in the
two patches. Christoph has the callout BEFORE the call to
unmap_vmas() whereas you have it AFTER. The net result is that
the GRU does a LOT of 1-page TLB flushes during process
teardown. These flushes do not occur with Christoph's patch
(see the first sketch below).

- the range callouts in Christoph's patch benefit the GRU because
multiple TLB entries can be flushed with a single GRU
instruction (the GRU hardware supports a range flush using a
vaddr & length). The #v6 patch does a separate TLB flush for each
page in the range. Flushing on the GRU is slow, so being able to
flush multiple pages with a single request is a benefit (see the
second sketch below).
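
For the first point, here is a rough sketch of the ordering difference
in the exit_mmap() path. This is only an illustration of where the
callout sits, not the actual code from either patch, and the
unmap_vmas() arguments are elided:

/* Christoph's ordering: external TLBs are dropped up front, so the
 * per-page unmaps below trigger no further callouts to the GRU.     */
void exit_mmap(struct mm_struct *mm)
{
	...
	mmu_notifier_release(mm);
	unmap_vmas(...);
	...
}

/* #v6 ordering: every page torn down by unmap_vmas() still has a
 * registered notifier, so the GRU sees a 1-page flush per page
 * before the release callout finally runs.                          */
void exit_mmap(struct mm_struct *mm)
{
	...
	unmap_vmas(...);
	mmu_notifier_release(mm);
	...
}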
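
For the second point, a sketch of what the GRU-side callouts look like
under the two interfaces. The op signatures are approximations of the
proposed APIs, and gru_flush_tlb_range()/gru_flush_tlb_page() are
placeholder names for whatever the driver actually uses to issue a
hardware flush:

/* Range callout (Christoph's interface, approximate signature):
 * the whole invalidated range goes out as ONE flush request,
 * since the GRU hardware accepts a vaddr & length.                  */
static void gru_invalidate_range(struct mmu_notifier *mn,
				 struct mm_struct *mm,
				 unsigned long start,
				 unsigned long end)
{
	gru_flush_tlb_range(mm, start, end - start);
}

/* Per-page callout (#v6 interface, approximate signature):
 * invoked once per page, so an N-page invalidate costs N slow
 * flush requests on the GRU.                                        */
static void gru_invalidate_page(struct mmu_notifier *mn,
				struct mm_struct *mm,
				unsigned long address)
{
	gru_flush_tlb_page(mm, address);
}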

Seems like the latter difference could be significant for other users
of mmu notifiers.


--- jack

