    Subject: [RFC][PATCH 9/9] make kvm mmu shrinker more aggressive

    In a previous patch, we removed the 'nr_to_scan'
    tracking.  It was not being used to track the number
    of objects scanned, so we stopped using it entirely.
    Here, we start using it again.

    The theory here is simple: if we already have the
    refcount and the kvm->mmu_lock, then we should do as
    much work as possible under the lock.  The downside is
    that we're less fair about the KVM instances from which
    we reclaim.  Each call to mmu_shrink() will tend to
    "pick on" one instance, after which it gets moved to
    the end of the list and left alone for a while, as
    sketched below.
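
    Roughly, the intended behavior is the following (a
    simplified sketch, not the code from this series:
    refcounting and error handling are omitted):

	struct kvm *kvm;

	spin_lock(&kvm_lock);
	/* Pick one instance and work it over. */
	kvm = list_first_entry(&vm_list, struct kvm, vm_list);
	/*
	 * Rotate it to the tail of vm_list so that later
	 * calls pick on somebody else.
	 */
	list_move_tail(&kvm->vm_list, &vm_list);
	spin_unlock(&kvm_lock);

	/* Do as much work as 'nr_to_scan' allows; the mmu
	 * pages are freed under kvm->mmu_lock in here. */
	nr_to_scan -= shrink_kvm_mmu(kvm, nr_to_scan);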

    The use of 'nr_to_scan' inside shrink_kvm_mmu() also
    ensures that we do not over-reclaim when mmu_shrink()
    has already done a significant amount of scanning in
    this call.  For example, if the refcount attempt plus
    the pages freed from the first instance already consume
    the entire 'nr_to_scan' budget, the retry loop exits
    without touching any other instance.

    In the end, this patch defines a "scan" as one of the
    following (sketched below):
    1. an attempt to acquire a refcount on a 'struct kvm'
    2. freeing a kvm mmu page
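
    A rough sketch of how those two scan types get charged
    in the reclaim loop (the pick_next_vm() and kvm_try_get()
    helpers are hypothetical, shown only to illustrate the
    accounting; the real logic is spread across mmu_shrink()
    and shrink_kvm_mmu()):

	struct kvm *kvm;

    retry:
	if (nr_to_scan <= 0)
		return 0;	/* return value handling elided */

	kvm = pick_next_vm();	/* hypothetical, as above */

	/* Scan type 1: every refcount attempt costs one
	 * scan, whether or not it succeeds. */
	nr_to_scan--;
	if (!kvm_try_get(kvm))	/* hypothetical try-get */
		goto retry;

	/* Scan type 2: shrink_kvm_mmu() charges one scan
	 * per mmu page it frees and returns how much of
	 * the budget it used. */
	nr_to_scan -= shrink_kvm_mmu(kvm, nr_to_scan);

	kvm_put_kvm(kvm);
	goto retry;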

    Ideally, we would also expose some of the work done
    inside kvm_mmu_remove_some_alloc_mmu_pages() and count
    it as scanning, but I think we have churned enough for
    the moment.

    Signed-off-by: Dave Hansen <>

    linux-2.6.git-dave/arch/x86/kvm/mmu.c | 11 ++++++-----
    1 file changed, 6 insertions(+), 5 deletions(-)

    diff -puN arch/x86/kvm/mmu.c~make-shrinker-more-aggressive arch/x86/kvm/mmu.c
    --- linux-2.6.git/arch/x86/kvm/mmu.c~make-shrinker-more-aggressive 2010-06-14 11:30:44.000000000 -0700
    +++ linux-2.6.git-dave/arch/x86/kvm/mmu.c 2010-06-14 11:38:04.000000000 -0700
    @@ -2935,8 +2935,10 @@ static int shrink_kvm_mmu(struct kvm *kv
     
     	idx = srcu_read_lock(&kvm->srcu);
    -	if (kvm->arch.n_used_mmu_pages > 0)
    -		freed_pages = kvm_mmu_remove_some_alloc_mmu_pages(kvm);
    +	while (nr_to_scan > 0 && kvm->arch.n_used_mmu_pages > 0) {
    +		freed_pages += kvm_mmu_remove_some_alloc_mmu_pages(kvm);
    +		nr_to_scan--;
    +	}
     
     	srcu_read_unlock(&kvm->srcu, idx);
    @@ -2952,7 +2954,6 @@ static int shrink_kvm_mmu(struct kvm *kv
     static int mmu_shrink(int nr_to_scan, gfp_t gfp_mask)
     {
     	int err;
    -	int freed;
     	struct kvm *kvm;
     
     	if (nr_to_scan == 0)
    @@ -2989,11 +2990,11 @@ retry:
     	 * operation itself.
     	 */
    -	freed = shrink_kvm_mmu(kvm, nr_to_scan);
    +	nr_to_scan -= shrink_kvm_mmu(kvm, nr_to_scan);
     
    -	if (!freed && nr_to_scan > 0)
    +	if (nr_to_scan > 0)
     		goto retry;

    diff -puN arch/x86/kvm/x86.c~make-shrinker-more-aggressive arch/x86/kvm/x86.c
    diff -puN arch/x86/include/asm/kvm_host.h~make-shrinker-more-aggressive arch/x86/include/asm/kvm_host.h
