Subject: Re: [PATCH 2/7] KVM: X86: Synchronize the shadow pagetable before link it


On 2021/9/4 00:06, Sean Christopherson wrote:

> -static void mmu_sync_children(struct kvm_vcpu *vcpu,
> -                              struct kvm_mmu_page *parent)
> +static int mmu_sync_children(struct kvm_vcpu *vcpu,
> +                             struct kvm_mmu_page *parent, bool can_yield)
>  {
>          int i;
>          struct kvm_mmu_page *sp;
> @@ -2050,7 +2050,15 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>                          flush |= kvm_sync_page(vcpu, sp, &invalid_list);
>                          mmu_pages_clear_parents(&parents);
>                  }
> -                if (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock)) {
> +                /*
> +                 * Don't yield if there are fewer than <N> unsync children
> +                 * remaining, just finish up and get out.
> +                 */
> +                if (parent->unsync_children > SOME_ARBITRARY_THRESHOLD &&
> +                    (need_resched() || rwlock_needbreak(&vcpu->kvm->mmu_lock))) {
> +                        if (!can_yield)
> +                                return -EINTR;
> +


Another thought about this function.

It is a courtesy to break when rwlock_needbreak() says so, but the price is
quite high: the break forces a remote TLB flush that interrupts several pCPUs.
I think we could break only when need_resched() is set.
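
Roughly what I have in mind, as an untested sketch on top of the hunk above
(keeping your SOME_ARBITRARY_THRESHOLD placeholder and leaving the body of the
block as it is today), is to drop rwlock_needbreak() from the yield condition:

	if (parent->unsync_children > SOME_ARBITRARY_THRESHOLD &&
	    need_resched()) {
		if (!can_yield)
			return -EINTR;

		/* Flush or zap what has been batched before dropping mmu_lock. */
		kvm_mmu_flush_or_zap(vcpu, &invalid_list, false, flush);
		cond_resched_rwlock_write(&vcpu->kvm->mmu_lock);
		flush = false;
	}

With that, a contended mmu_lock alone would no longer force the remote flush;
we would only pay for it when the vCPU actually needs to reschedule.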
