Subject: Re: [PATCH 1/3] KVM: SVM: extract avic_ring_doorbell
On Fri, Feb 11, 2022, Paolo Bonzini wrote:
> From: Maxim Levitsky <mlevitsk@redhat.com>
>
> The check on the current CPU adds an extra level of indentation to
> svm_deliver_avic_intr and conflates documentation on what happens
> if the vCPU exits (of interest to svm_deliver_avic_intr) and migrates
> (only of interest to avic_ring_doorbell, which calls get/put_cpu()).
> Extract the wrmsr to a separate function and rewrite the
> comment in svm_deliver_avic_intr().
>
> Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
> Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Bad SoB chain, should be:

Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>
Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>

Interestingly, git-apply drops the second, redundant SoB and yields

Co-developed-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Maxim Levitsky <mlevitsk@redhat.com>

Which will probably get you yelled at by Stephen's scripts :-)

A few nits below...

Reviewed-by: Sean Christopherson <seanjc@google.com>

> ---
> arch/x86/kvm/svm/avic.c | 33 ++++++++++++++++++++++-----------
> 1 file changed, 22 insertions(+), 11 deletions(-)
>
> diff --git a/arch/x86/kvm/svm/avic.c b/arch/x86/kvm/svm/avic.c
> index 3f9b48732aea..4d1baf5c8f6a 100644
> --- a/arch/x86/kvm/svm/avic.c
> +++ b/arch/x86/kvm/svm/avic.c
> @@ -269,6 +269,24 @@ static int avic_init_backing_page(struct kvm_vcpu *vcpu)
>  	return 0;
>  }
>
> +

Spurious newline.

> +static void avic_ring_doorbell(struct kvm_vcpu *vcpu)
> +{
> +	/*
> +	 * Note, the vCPU could get migrated to a different pCPU at any
> +	 * point, which could result in signalling the wrong/previous
> +	 * pCPU. But if that happens the vCPU is guaranteed to do a
> +	 * VMRUN (after being migrated) and thus will process pending
> +	 * interrupts, i.e. a doorbell is not needed (and the spurious
> +	 * one is harmless).

Please run these out to 80 chars, it saves a whole line!

/*
 * Note, the vCPU could get migrated to a different pCPU at any point,
 * which could result in signalling the wrong/previous pCPU. But if
 * that happens the vCPU is guaranteed to do a VMRUN (after being
 * migrated) and thus will process pending interrupts, i.e. a doorbell
 * is not needed (and the spurious one is harmless).
 */

> +	 */
> +	int cpu = READ_ONCE(vcpu->cpu);
> +
> +	if (cpu != get_cpu())
> +		wrmsrl(MSR_AMD64_SVM_AVIC_DOORBELL, kvm_cpu_get_apicid(cpu));
> +	put_cpu();
> +}
> +
>  static void avic_kick_target_vcpus(struct kvm *kvm, struct kvm_lapic *source,
>  				   u32 icrl, u32 icrh)
>  {
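
For reference, here's roughly how the extracted helper ends up being used on
the delivery path.  The body of svm_deliver_avic_intr() isn't quoted in this
hunk, so treat this as a sketch; avic_vcpu_is_running() and kvm_vcpu_wake_up()
are assumptions about the surrounding code, not part of this patch:

	/* Make the vector pending in the IRR before poking hardware. */
	kvm_lapic_set_irr(vec, vcpu->arch.apic);
	smp_mb__after_atomic();

	if (avic_vcpu_is_running(vcpu)) {
		/* vCPU is loaded, ring its pCPU so hardware processes the IRR. */
		avic_ring_doorbell(vcpu);
	} else {
		/* Not running, wake the vCPU; it will pick up the IRR on VMRUN. */
		kvm_vcpu_wake_up(vcpu);
	}

I.e. the caller handles the "vCPU isn't running at all" case, and the helper
only has to worry about racing with a migration, which is exactly what the new
comment documents.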
