From: Wanpeng Li <wanpengli@tencent.com>
Subject: [PATCH v4 1/2] KVM: X86: Less kvmclock sync induced vmexits after VM boots

During vCPU creation, a kvmclock sync worker is queued onto the global
workqueue before each vCPU's creation completes. Every such worker is
scheduled after a 300 * HZ delay, requests a kvmclock update for all vCPUs,
and kicks them out of guest mode. This gets especially bad when scaling to
large VMs, because of the large number of vmexits it induces. A single
worker, acting as the leader, is enough to trigger the kvmclock sync request
for all vCPUs.
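
For context, below is a simplified sketch (paraphrased from memory of the
periodic sync worker in arch/x86/kvm/x86.c, not taken from this patch; the
exact upstream code may differ slightly) of why one armed worker per VM is
sufficient: each run queues an immediate kvmclock update for the whole VM
and then re-arms itself for the next period.

static void kvmclock_sync_fn(struct work_struct *work)
{
	struct delayed_work *dwork = to_delayed_work(work);
	struct kvm_arch *ka = container_of(dwork, struct kvm_arch,
					   kvmclock_sync_work);
	struct kvm *kvm = container_of(ka, struct kvm, arch);

	if (!kvmclock_periodic_sync)
		return;

	/* Request a clock update for every vCPU of this VM right away ... */
	schedule_delayed_work(&kvm->arch.kvmclock_update_work, 0);
	/* ... and re-arm this worker for the next 300 * HZ period. */
	schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
					KVMCLOCK_SYNC_PERIOD);
}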

Signed-off-by: Wanpeng Li <wanpengli@tencent.com>
---
v3 -> v4:
* check vcpu->vcpu_idx

arch/x86/kvm/x86.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index fb5d64e..d0ba2d4 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -9390,8 +9390,9 @@ void kvm_arch_vcpu_postcreate(struct kvm_vcpu *vcpu)
 	if (!kvmclock_periodic_sync)
 		return;
 
-	schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
-					KVMCLOCK_SYNC_PERIOD);
+	if (vcpu->vcpu_idx == 0)
+		schedule_delayed_work(&kvm->arch.kvmclock_sync_work,
+						KVMCLOCK_SYNC_PERIOD);
 }
 
 void kvm_arch_vcpu_destroy(struct kvm_vcpu *vcpu)
--
2.7.4