Subject: [PATCH 5.19 0592/1157] KVM: x86/mmu: Drop RWX=0 SPTEs during ept_sync_page()
From: Sean Christopherson <seanjc@google.com>

[ Upstream commit 9fb3565743d58352f00964bf47213b88aff4bb82 ]

All of sync_page()'s existing checks filter out only !PRESENT gPTEs,
because without execute-only, all upper levels are guaranteed to be at
least READABLE. However, if EPT with execute-only support is in use by
L1, KVM can create an SPTE that is shadow-present but guest-inaccessible
(RWX=0) if the upper level combined permissions are R (or RW) and the
leaf EPTE is changed from R (or RW) to X. Because the EPTE is considered
present when viewed in isolation, and no reserved bits are set,
FNAME(prefetch_invalid_gpte) will consider the GPTE valid, and
sync_page() will go on to install a shadow-present SPTE whose effective
protections are RWX=0.
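
As a minimal illustration of the permission combining (a userspace C
sketch, not kernel code; the EPT_* constants and the AND-based combine
are simplified assumptions about how the hardware folds together the
R/W/X bits of each level of the EPT walk):

#include <stdio.h>

/* Simplified R/W/X bits, mirroring the low three bits of an EPTE. */
#define EPT_R 0x1u
#define EPT_W 0x2u
#define EPT_X 0x4u

int main(void)
{
	unsigned int upper = EPT_R | EPT_W;	/* upper-level EPTEs grant RW */
	unsigned int leaf  = EPT_X;		/* leaf EPTE changed from RW to X */

	/* Effective protections are the AND across all levels of the walk. */
	unsigned int combined = upper & leaf;

	/* Each EPTE is "present" in isolation (some R/W/X bit is set)... */
	printf("upper present = %d, leaf present = %d\n", upper != 0, leaf != 0);

	/* ...yet the combined protections are RWX=0: guest-inaccessible. */
	printf("combined RWX = %#x\n", combined);	/* prints 0 */
	return 0;
}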

    The SPTE is "correct": the guest translation is inaccessible because
    the combined protections of all levels yield RWX=0, and KVM will just
    redirect any vmexits to the guest. If EPT A/D bits are disabled, KVM
    can mistake the SPTE for an access-tracked SPTE, but again such confusion
    isn't fatal, as the "saved" protections are also RWX=0. However,
    creating a useless SPTE in general means that KVM messed up something,
    even if this particular goof didn't manifest as a functional bug.
    So, drop SPTEs whose new protections will yield a RWX=0 SPTE, and
    add a WARN in make_spte() to detect creation of SPTEs that will
    result in RWX=0 protections.
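
The check works because shadow_present_mask is non-zero for every paging
mode except EPT with execute-only enabled, where KVM sets it to 0. A
hedged sketch of the predicate (the helper name spte_would_be_rwx0() is
hypothetical, not part of the patch; the real check is open-coded in the
first hunk below):

/*
 * Hypothetical helper equivalent to the condition added to
 * FNAME(sync_page): with execute-only EPT, shadow_present_mask is 0,
 * so pte_access == 0 would produce a shadow-present SPTE whose
 * protections are RWX=0; all other paging modes fold in a
 * present/readable bit and create a read-only SPTE instead.
 */
static bool spte_would_be_rwx0(unsigned int pte_access, u64 shadow_present_mask)
{
	return !pte_access && !shadow_present_mask;
}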

Fixes: d95c55687e11 ("kvm: mmu: track read permission explicitly for shadow EPT page tables")
Cc: David Matlack <dmatlack@google.com>
Cc: Ben Gardon <bgardon@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-Id: <20220513195000.99371-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Sasha Levin <sashal@kernel.org>
---
 arch/x86/kvm/mmu/paging_tmpl.h | 9 ++++++++-
 arch/x86/kvm/mmu/spte.c        | 2 ++
 2 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu/paging_tmpl.h b/arch/x86/kvm/mmu/paging_tmpl.h
index db80f7ccaa4e..1576e65b3b1f 100644
--- a/arch/x86/kvm/mmu/paging_tmpl.h
+++ b/arch/x86/kvm/mmu/paging_tmpl.h
@@ -1053,7 +1053,14 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 		if (sync_mmio_spte(vcpu, &sp->spt[i], gfn, pte_access))
 			continue;
 
-		if (gfn != sp->gfns[i]) {
+		/*
+		 * Drop the SPTE if the new protections would result in a RWX=0
+		 * SPTE or if the gfn is changing. The RWX=0 case only affects
+		 * EPT with execute-only support, i.e. EPT without an effective
+		 * "present" bit, as all other paging modes will create a
+		 * read-only SPTE if pte_access is zero.
+		 */
+		if ((!pte_access && !shadow_present_mask) || gfn != sp->gfns[i]) {
 			drop_spte(vcpu->kvm, &sp->spt[i]);
 			flush = true;
 			continue;
diff --git a/arch/x86/kvm/mmu/spte.c b/arch/x86/kvm/mmu/spte.c
index ba1be0159095..186fa97d4375 100644
--- a/arch/x86/kvm/mmu/spte.c
+++ b/arch/x86/kvm/mmu/spte.c
@@ -143,6 +143,8 @@ bool make_spte(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp,
 	u64 spte = SPTE_MMU_PRESENT_MASK;
 	bool wrprot = false;
 
+	WARN_ON_ONCE(!pte_access && !shadow_present_mask);
+
 	if (sp->role.ad_disabled)
 		spte |= SPTE_TDP_AD_DISABLED_MASK;
 	else if (kvm_mmu_page_ad_need_write_protect(sp))
-- 
2.35.1

