Subject: Re: [PATCH v2 14/18] KVM: x86/mmu: avoid indirect call for get_cr3
On Thu, Feb 24, 2022, Maxim Levitsky wrote:
> Not sure if that is worth it, though. IMHO it would be better to
> convert mmu callbacks (and nested ops callbacks, etc.) to static calls.

nested_ops can utilize static_call(), mmu hooks cannot. static_call() patches
the call site itself, which means there cannot be multiple targets at any given
time. The "static" part refers to the target not changing, generally for the
lifetime of the kernel/module in question. Even with TDP that doesn't hold true
due to nested virtualization.
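
The nested_ops conversion would look like any other static_call() user.
Rough sketch, with a made-up hook name purely for illustration:

#include <linux/kvm_host.h>
#include <linux/static_call.h>

/* Made-up hook, not the actual nested_ops layout. */
static int vmx_nested_check_events(struct kvm_vcpu *vcpu)
{
	return 0;
}

/* One target per call "name", patched directly into every call site. */
DEFINE_STATIC_CALL(kvm_nested_check_events, vmx_nested_check_events);

int kvm_check_nested_events_example(struct kvm_vcpu *vcpu)
{
	/* Compiles to a direct CALL, no indirect branch, no retpoline. */
	return static_call(kvm_nested_check_events)(vcpu);
}

void kvm_set_nested_ops_example(void)
{
	/*
	 * Retargeting is possible, but it's global: text patching means
	 * exactly one target at any given time, which is why this works
	 * for nested_ops (fixed at module load) but not for mmu hooks.
	 */
	static_call_update(kvm_nested_check_events, vmx_nested_check_events);
}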

We could selectively use INDIRECT_CALL_*() for some of the MMU calls, but given
how few cases and targets we really care about, I prefer our homebrewed manual
checks as there's less macro maze to navigate.
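
For reference, INDIRECT_CALL_1() is the same compare-and-direct-call trick
wrapped in a macro; include/linux/indirect_call_wrapper.h boils down to
roughly:

#ifdef CONFIG_RETPOLINE
/*
 * If the pointer matches the expected target, make a direct (and thus
 * retpoline-free) call, else fall back to the indirect call.
 */
#define INDIRECT_CALL_1(f, f1, ...)					\
	({								\
		likely(f == f1) ? f1(__VA_ARGS__) : f(__VA_ARGS__);	\
	})
#else
/* Without retpolines the indirect call is cheap, just do it. */
#define INDIRECT_CALL_1(f, f1, ...) f(__VA_ARGS__)
#endif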

E.g. to convert the TDP fault case:

diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 1d0c1904d69a..940ec6a9d284 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -3,6 +3,8 @@
 #define __KVM_X86_MMU_H
 
 #include <linux/kvm_host.h>
+#include <linux/indirect_call_wrapper.h>
+
 #include "kvm_cache_regs.h"
 #include "cpuid.h"
 
@@ -169,7 +171,8 @@ struct kvm_page_fault {
 	bool map_writable;
 };
 
-int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault);
+INDIRECT_CALLABLE_DECLARE(int kvm_tdp_page_fault(struct kvm_vcpu *vcpu,
+						 struct kvm_page_fault *fault));
 
 extern int nx_huge_pages;
 static inline bool is_nx_huge_page_enabled(void)
@@ -196,11 +199,9 @@ static inline int kvm_mmu_do_page_fault(struct kvm_vcpu *vcpu, gpa_t cr2_or_gpa,
 		.req_level = PG_LEVEL_4K,
 		.goal_level = PG_LEVEL_4K,
 	};
-#ifdef CONFIG_RETPOLINE
-	if (fault.is_tdp)
-		return kvm_tdp_page_fault(vcpu, &fault);
-#endif
-	return vcpu->arch.mmu->page_fault(vcpu, &fault);
+	struct kvm_mmu *mmu = vcpu->arch.mmu;
+
+	return INDIRECT_CALL_1(mmu->page_fault, kvm_tdp_page_fault, vcpu, &fault);
 }
 
 /*
diff --git a/arch/x86/kvm/mmu/mmu.c b/arch/x86/kvm/mmu/mmu.c
index c1deaec795c2..a3ad1bc58859 100644
--- a/arch/x86/kvm/mmu/mmu.c
+++ b/arch/x86/kvm/mmu/mmu.c
@@ -4055,7 +4055,8 @@ int kvm_handle_page_fault(struct kvm_vcpu *vcpu, u64 error_code,
 }
 EXPORT_SYMBOL_GPL(kvm_handle_page_fault);
 
-int kvm_tdp_page_fault(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
+INDIRECT_CALLABLE_SCOPE int kvm_tdp_page_fault(struct kvm_vcpu *vcpu,
+					       struct kvm_page_fault *fault)
 {
 	while (fault->max_level > PG_LEVEL_4K) {
 		int page_num = KVM_PAGES_PER_HPAGE(fault->max_level);
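
With that, the common case (TDP enabled, mmu->page_fault == kvm_tdp_page_fault)
gets a direct call, and the explicit CONFIG_RETPOLINE #ifdef goes away since
the macro itself degenerates to the plain indirect call on non-retpoline
builds.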