Subject: Re: [PATCH v4 05/10] KVM/x86: expose MSR_IA32_PERF_CAPABILITIES to the guest
On Wed, Dec 26, 2018 at 2:01 AM Wei Wang <wei.w.wang@intel.com> wrote:
>
> Bits [5:0] of MSR_IA32_PERF_CAPABILITIES indicate the format of the
> addresses stored in the LBR stack. Expose those bits to the guest
> when the guest LBR feature is enabled.
>
> Signed-off-by: Wei Wang <wei.w.wang@intel.com>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Andi Kleen <ak@linux.intel.com>
> ---
>  arch/x86/include/asm/perf_event.h | 2 ++
>  arch/x86/kvm/cpuid.c              | 2 +-
>  arch/x86/kvm/vmx.c                | 9 +++++++++
>  3 files changed, 12 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/perf_event.h b/arch/x86/include/asm/perf_event.h
> index 2f82795..eee09b7 100644
> --- a/arch/x86/include/asm/perf_event.h
> +++ b/arch/x86/include/asm/perf_event.h
> @@ -87,6 +87,8 @@
>  #define ARCH_PERFMON_BRANCH_MISSES_RETIRED 6
>  #define ARCH_PERFMON_EVENTS_COUNT 7
>
> +#define X86_PERF_CAP_MASK_LBR_FMT 0x3f
> +
>  /*
>   * Intel "Architectural Performance Monitoring" CPUID
>   * detection/enumeration details:
> diff --git a/arch/x86/kvm/cpuid.c b/arch/x86/kvm/cpuid.c
> index 7bcfa61..3b8a57b 100644
> --- a/arch/x86/kvm/cpuid.c
> +++ b/arch/x86/kvm/cpuid.c
> @@ -365,7 +365,7 @@ static inline int __do_cpuid_ent(struct kvm_cpuid_entry2 *entry, u32 function,
>                  F(XMM3) | F(PCLMULQDQ) | 0 /* DTES64, MONITOR */ |
>                  0 /* DS-CPL, VMX, SMX, EST */ |
>                  0 /* TM2 */ | F(SSSE3) | 0 /* CNXT-ID */ | 0 /* Reserved */ |
> -                F(FMA) | F(CX16) | 0 /* xTPR Update, PDCM */ |
> +                F(FMA) | F(CX16) | 0 /* xTPR Update*/ | F(PDCM) |
>                  F(PCID) | 0 /* Reserved, DCA */ | F(XMM4_1) |
>                  F(XMM4_2) | F(X2APIC) | F(MOVBE) | F(POPCNT) |
>                  0 /* Reserved*/ | F(AES) | F(XSAVE) | 0 /* OSXSAVE */ | F(AVX) |
> diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
> index 8d5d984..ee02967 100644
> --- a/arch/x86/kvm/vmx.c
> +++ b/arch/x86/kvm/vmx.c
> @@ -4161,6 +4161,13 @@ static int vmx_get_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>                          return 1;
>                  msr_info->data = vcpu->arch.ia32_xss;
>                  break;
> +        case MSR_IA32_PERF_CAPABILITIES:
> +                if (!boot_cpu_has(X86_FEATURE_PDCM))
> +                        return 1;
> +                msr_info->data = native_read_msr(MSR_IA32_PERF_CAPABILITIES);

Since this isn't guarded by vcpu->kvm->arch.lbr_in_guest, it breaks
backwards compatibility, doesn't it?
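
Something along these lines (an untested sketch; it reuses the series'
lbr_in_guest flag and checks the guest's CPUID rather than the host's)
would keep the previous behaviour, i.e. #GP unless ignore_msrs, for VMs
that don't opt in:

case MSR_IA32_PERF_CAPABILITIES:
        /* Refuse the read unless the VM opted in to guest LBR. */
        if (!vcpu->kvm->arch.lbr_in_guest ||
            !guest_cpuid_has(vcpu, X86_FEATURE_PDCM))
                return 1;
        msr_info->data = native_read_msr(MSR_IA32_PERF_CAPABILITIES) &
                         X86_PERF_CAP_MASK_LBR_FMT;
        break;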

> +                if (vcpu->kvm->arch.lbr_in_guest)
> +                        msr_info->data &= X86_PERF_CAP_MASK_LBR_FMT;
> +                break;
>          case MSR_TSC_AUX:
>                  if (!msr_info->host_initiated &&
>                      !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> @@ -4343,6 +4350,8 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
>                  else
>                          clear_atomic_switch_msr(vmx, MSR_IA32_XSS);
>                  break;
> +        case MSR_IA32_PERF_CAPABILITIES:
> +                return 1; /* RO MSR */
>          case MSR_TSC_AUX:
>                  if (!msr_info->host_initiated &&
>                      !guest_cpuid_has(vcpu, X86_FEATURE_RDTSCP))
> --
> 2.7.4
>

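For context, this is roughly what a guest kernel does with the exposed
bits once PDCM shows up in its CPUID (a sketch of the detection path,
not the exact guest perf code; the resulting value corresponds to the
LBR_FORMAT_* constants used by the host perf code):

/*
 * Sketch only: guest-side detection, reusing the
 * X86_PERF_CAP_MASK_LBR_FMT mask added by this patch.
 */
u64 caps;

if (boot_cpu_has(X86_FEATURE_PDCM)) {
        rdmsrl(MSR_IA32_PERF_CAPABILITIES, caps);
        /* bits [5:0]: how LBR from/to addresses are encoded */
        pr_info("LBR record format: %llu\n",
                caps & X86_PERF_CAP_MASK_LBR_FMT);
}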