    Subject: [RFC][PATCH 7/9] make kvm_get_kvm() more robust
    From: Dave Hansen <dave@linux.vnet.ibm.com>
    Date: 15 Jun 2010

    The comment tells most of the story here. This patch guarantees
    that once a user decrements kvm->users_count to 0, no one will
    increment it again.

    We'll need this in a moment because we are going to use
    kvm->users_count as a more generic refcount.
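
    (Illustration only, not part of the patch: the atomic_add_unless(&kvm->users_count, 1, 0)
    idiom used below behaves roughly like this C11 compare-and-swap loop, which refuses to
    hand out a new reference once the count has already dropped to zero; the get_ref()
    helper here is hypothetical.)

        #include <errno.h>
        #include <stdatomic.h>

        /* Take a reference only if the count has not already reached zero. */
        static int get_ref(atomic_int *count)
        {
                int old = atomic_load(count);

                while (old != 0) {
                        /* Try old -> old + 1; on failure 'old' is reloaded and we retry. */
                        if (atomic_compare_exchange_weak(count, &old, old + 1))
                                return 0;       /* reference obtained */
                }
                return -EBUSY;  /* count already hit zero: object is being destroyed */
        }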

    Signed-off-by: Dave Hansen <dave@linux.vnet.ibm.com>
    ---

    linux-2.6.git-dave/include/linux/kvm_host.h |    2 -
    linux-2.6.git-dave/virt/kvm/kvm_main.c      |   32 +++++++++++++++++++++++++---
    2 files changed, 30 insertions(+), 4 deletions(-)

    diff -puN include/linux/kvm_host.h~make-kvm_get_kvm-more-robust include/linux/kvm_host.h
    --- linux-2.6.git/include/linux/kvm_host.h~make-kvm_get_kvm-more-robust 2010-06-11 08:39:16.000000000 -0700
    +++ linux-2.6.git-dave/include/linux/kvm_host.h 2010-06-11 08:39:16.000000000 -0700
    @@ -247,7 +247,7 @@ int kvm_init(void *opaque, unsigned vcpu
             struct module *module);
     void kvm_exit(void);
     
    -void kvm_get_kvm(struct kvm *kvm);
    +int kvm_get_kvm(struct kvm *kvm);
     void kvm_put_kvm(struct kvm *kvm);
     
     static inline struct kvm_memslots *kvm_memslots(struct kvm *kvm)
    diff -puN virt/kvm/kvm_main.c~make-kvm_get_kvm-more-robust virt/kvm/kvm_main.c
    --- linux-2.6.git/virt/kvm/kvm_main.c~make-kvm_get_kvm-more-robust 2010-06-11 08:39:16.000000000 -0700
    +++ linux-2.6.git-dave/virt/kvm/kvm_main.c 2010-06-11 08:39:16.000000000 -0700
    @@ -496,9 +496,30 @@ static void kvm_destroy_vm(struct kvm *k
             mmdrop(mm);
     }

    -void kvm_get_kvm(struct kvm *kvm)
    +/*
    + * Once the counter goes to 0, we destroy the
    + * kvm object. Do not allow additional refs
    + * to be obtained once this occurs.
    + *
    + * Any calls which are done via the kvm fd
    + * could use atomic_inc(). That is because
    + * ->users_count is set to 1 when the kvm fd
    + * is created, and stays at least 1 while
    + * the fd exists.
    + *
    + * But, those calls are currently rare, so do
    + * this (more expensive) atomic_add_unless()
    + * to keep the number of functions down.
    + *
    + * Returns 0 if the reference was obtained
    + * successfully.
    + */
    +int kvm_get_kvm(struct kvm *kvm)
     {
    -        atomic_inc(&kvm->users_count);
    +        int did_add = atomic_add_unless(&kvm->users_count, 1, 0);
    +        if (did_add)
    +                return 0;
    +        return -EBUSY;
     }
     EXPORT_SYMBOL_GPL(kvm_get_kvm);

    @@ -1332,7 +1353,12 @@ static int kvm_vm_ioctl_create_vcpu(stru
             BUG_ON(kvm->vcpus[atomic_read(&kvm->online_vcpus)]);
     
             /* Now it's all set up, let userspace reach it */
    -        kvm_get_kvm(kvm);
    +        r = kvm_get_kvm(kvm);
    +        /*
    +         * Getting called via the kvm fd _should_ guarantee
    +         * that we can always get a reference.
    +         */
    +        WARN_ON(r);
             r = create_vcpu_fd(vcpu);
             if (r < 0) {
                     kvm_put_kvm(kvm);
    diff -puN arch/x86/kvm/mmu.c~make-kvm_get_kvm-more-robust arch/x86/kvm/mmu.c
    _
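
    (Hypothetical usage sketch, not from this series: with the new return value, a caller
    that is not already pinned by the kvm fd has to handle kvm_get_kvm() failing instead
    of assuming the reference was taken. do_something_with_vm() is made up for
    illustration.)

        #include <linux/kvm_host.h>

        static int do_something_with_vm(struct kvm *kvm)
        {
                int r = kvm_get_kvm(kvm);

                if (r)
                        return r;       /* -EBUSY: users_count already hit 0 */

                /* ... safe to use the kvm object here ... */

                kvm_put_kvm(kvm);
                return 0;
        }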
