Subject: Re: [PATCH] KVM: x86/mmu: Skip !MMU-present SPTEs when removing SP in exclusive mode
On Wed, Mar 10, 2021, Paolo Bonzini wrote:
> On 10/03/21 01:30, Sean Christopherson wrote:
> > diff --git a/arch/x86/kvm/mmu/tdp_mmu.c b/arch/x86/kvm/mmu/tdp_mmu.c
> > index 50ef757c5586..f0c99fa04ef2 100644
> > --- a/arch/x86/kvm/mmu/tdp_mmu.c
> > +++ b/arch/x86/kvm/mmu/tdp_mmu.c
> > @@ -323,7 +323,18 @@ static void handle_removed_tdp_mmu_page(struct kvm *kvm, u64 *pt,
> >  				cpu_relax();
> >  			}
> >  		} else {
> > +			/*
> > +			 * If the SPTE is not MMU-present, there is no backing
> > +			 * page associated with the SPTE and so no side effects
> > +			 * that need to be recorded, and exclusive ownership of
> > +			 * mmu_lock ensures the SPTE can't be made present.
> > +			 * Note, zapping MMIO SPTEs is also unnecessary as they
> > +			 * are guarded by the memslots generation, not by being
> > +			 * unreachable.
> > +			 */
> >  			old_child_spte = READ_ONCE(*sptep);
> > +			if (!is_shadow_present_pte(old_child_spte))
> > +				continue;
> >
> >  			/*
> >  			 * Marking the SPTE as a removed SPTE is not
>
> Ben, do you plan to make this path take mmu_lock for read? If so, this
> wouldn't be too useful IIUC.

I can see kvm_mmu_zap_all_fast()->kvm_tdp_mmu_zap_all() moving to a shared-mode
flow, but I don't think we'll ever want to move away from exclusive-mode zapping
for kvm_arch_flush_shadow_all()->kvm_mmu_zap_all()->kvm_tdp_mmu_zap_all(). In
that case, the VM is dead or dying; freeing memory should be done as quickly as
possible.
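
To illustrate why the exclusive-mode skip is safe, here is a rough standalone
sketch. The names in it (spte_present(), zap_page_table(), the made-up present
bit) are illustrative stand-ins, not the real TDP MMU code: under exclusive
mmu_lock nothing can make a !present SPTE present, so the walk can skip such
entries without any atomics or side-effect bookkeeping.

/*
 * Standalone model of the exclusive-mode fast path.  Everything here is
 * a simplified stand-in for the real KVM TDP MMU structures.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define SPTE_ENTRIES	512
#define SPTE_PRESENT	(UINT64_C(1) << 0)	/* hypothetical present bit */

static bool spte_present(uint64_t spte)
{
	return spte & SPTE_PRESENT;
}

/*
 * Exclusive-mode walk: the caller is the only possible writer, so a
 * non-present SPTE has no backing page and cannot become present
 * underneath us; skip it outright.
 */
static void zap_page_table(uint64_t *pt)
{
	int i;

	for (i = 0; i < SPTE_ENTRIES; i++) {
		if (!spte_present(pt[i]))
			continue;

		/* Record side effects (dirty/accessed state), then clear. */
		pt[i] = 0;
	}
}

int main(void)
{
	uint64_t pt[SPTE_ENTRIES] = { [3] = SPTE_PRESENT | 0x1000 };

	zap_page_table(pt);
	printf("entry 3 after zap: %#llx\n", (unsigned long long)pt[3]);
	return 0;
}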
