    Subject: [PATCH 4.19 063/139] kvm: mmu: Fix race in emulated page table writes
    4.19-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Junaid Shahid <junaids@google.com>

    commit 0e0fee5c539b61fdd098332e0e2cc375d9073706 upstream.

    When a guest page table is updated via an emulated write,
    kvm_mmu_pte_write() is called to update the shadow PTE using the just
    written guest PTE value. But if two emulated guest PTE writes happened
    concurrently, it is possible that the guest PTE and the shadow PTE end
    up being out of sync. Emulated writes do not mark the shadow page as
    unsync-ed, so this inconsistency will not be resolved even by a guest TLB
    flush (unless the page was marked as unsync-ed at some other point).

    This is fixed by re-reading the current value of the guest PTE after the
    MMU lock has been acquired instead of just using the value that was
    written prior to calling kvm_mmu_pte_write().

    Signed-off-by: Junaid Shahid <junaids@google.com>
    Reviewed-by: Wanpeng Li <wanpengli@tencent.com>
    Cc: stable@vger.kernel.org
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
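
    The ordering the fix relies on can be sketched outside of KVM: take the
    lock first, re-read the authoritative value, and only then update the
    derived copy. Below is a minimal userspace sketch of that pattern using
    pthreads; the names (guest_pte, shadow_pte, emulated_write, writer) and
    the mutex standing in for vcpu->kvm->mmu_lock are illustrative only,
    not KVM code.

    #include <inttypes.h>
    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative stand-ins: guest_pte is the authoritative value,
     * shadow_pte the derived copy, and this mutex plays the role of the
     * MMU lock. */
    static _Atomic uint64_t guest_pte;
    static uint64_t shadow_pte;
    static pthread_mutex_t mmu_lock = PTHREAD_MUTEX_INITIALIZER;

    static void emulated_write(uint64_t val)
    {
            atomic_store(&guest_pte, val);        /* the emulated guest write */

            pthread_mutex_lock(&mmu_lock);
            /* Fixed ordering: fetch the current value under the lock instead
             * of reusing the snapshot 'val' captured before locking. */
            uint64_t gentry = atomic_load(&guest_pte);
            shadow_pte = gentry;                  /* update the derived copy */
            pthread_mutex_unlock(&mmu_lock);
    }

    static void *writer(void *arg)
    {
            uint64_t base = (uint64_t)(uintptr_t)arg;

            for (uint64_t i = 0; i < 100000; i++)
                    emulated_write(base + i);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b;

            pthread_create(&a, NULL, writer, (void *)(uintptr_t)1);
            pthread_create(&b, NULL, writer, (void *)(uintptr_t)0x100000);
            pthread_join(a, NULL);
            pthread_join(b, NULL);

            /* With the re-read under the lock, the two values match here. */
            uint64_t g = atomic_load(&guest_pte);
            printf("guest=%" PRIu64 " shadow=%" PRIu64 " (%s)\n", g, shadow_pte,
                   g == shadow_pte ? "in sync" : "OUT OF SYNC");
            return 0;
    }

    Replacing the re-read with the pre-lock snapshot (shadow_pte = val;)
    reopens the window described above: a concurrent writer can change
    guest_pte between the snapshot and the locked update, leaving the two
    copies out of sync, which is exactly what moving the
    mmu_pte_write_fetch_gpte() call under mmu_lock avoids.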
    arch/x86/kvm/mmu.c | 27 +++++++++------------------
    1 file changed, 9 insertions(+), 18 deletions(-)

    --- a/arch/x86/kvm/mmu.c
    +++ b/arch/x86/kvm/mmu.c
    @@ -5013,9 +5013,9 @@ static bool need_remote_flush(u64 old, u
    }

    static u64 mmu_pte_write_fetch_gpte(struct kvm_vcpu *vcpu, gpa_t *gpa,
    - const u8 *new, int *bytes)
    + int *bytes)
    {
    - u64 gentry;
    + u64 gentry = 0;
    int r;

    /*
    @@ -5027,22 +5027,12 @@ static u64 mmu_pte_write_fetch_gpte(stru
    /* Handle a 32-bit guest writing two halves of a 64-bit gpte */
    *gpa &= ~(gpa_t)7;
    *bytes = 8;
    - r = kvm_vcpu_read_guest(vcpu, *gpa, &gentry, 8);
    - if (r)
    - gentry = 0;
    - new = (const u8 *)&gentry;
    }

    - switch (*bytes) {
    - case 4:
    - gentry = *(const u32 *)new;
    - break;
    - case 8:
    - gentry = *(const u64 *)new;
    - break;
    - default:
    - gentry = 0;
    - break;
    + if (*bytes == 4 || *bytes == 8) {
    + r = kvm_vcpu_read_guest_atomic(vcpu, *gpa, &gentry, *bytes);
    + if (r)
    + gentry = 0;
    }

    return gentry;
    @@ -5146,8 +5136,6 @@ static void kvm_mmu_pte_write(struct kvm

    pgprintk("%s: gpa %llx bytes %d\n", __func__, gpa, bytes);

    - gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, new, &bytes);
    -
    /*
    * No need to care whether allocation memory is successful
    * or not since pte prefetch is skiped if it does not have
    @@ -5156,6 +5144,9 @@ static void kvm_mmu_pte_write(struct kvm
    mmu_topup_memory_caches(vcpu);

    spin_lock(&vcpu->kvm->mmu_lock);
    +
    + gentry = mmu_pte_write_fetch_gpte(vcpu, &gpa, &bytes);
    +
    ++vcpu->kvm->stat.mmu_pte_write;
    kvm_mmu_audit(vcpu, AUDIT_PRE_PTE_WRITE);

