From: Lai Jiangshan
Date: 2021-08-25
Subject: Re: [PATCH 2/2] KVM: Guard cpumask NULL check with CONFIG_CPUMASK_OFFSTACK
On Sat, Aug 21, 2021 at 8:09 AM Sean Christopherson <seanjc@google.com> wrote:
>
> Check for a NULL cpumask_var_t when kicking multiple vCPUs if and only if
> cpumasks are configured to be allocated off-stack. This is a meaningless
> optimization, e.g. avoids a TEST+Jcc and TEST+CMOV on x86, but more
> importantly helps document that the NULL check is necessary even though
> all callers pass in a local variable.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> virt/kvm/kvm_main.c | 12 ++++++++++--
> 1 file changed, 10 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 786b914db98f..82c5280dd5ce 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -247,7 +247,7 @@ static void ack_flush(void *_completed)
>
> static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
> {
> - if (unlikely(!cpus))
> + if (IS_ENABLED(CONFIG_CPUMASK_OFFSTACK) && unlikely(!cpus))
> cpus = cpu_online_mask;
>
> if (cpumask_empty(cpus))
> @@ -277,6 +277,14 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
> if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
> continue;
>
> + /*
> + * tmp can be NULL if cpumasks are allocated off stack, as
> + * allocation of the mask is deliberately not fatal and is
> + * handled by falling back to kicking all online CPUs.
> + */
> + if (IS_ENABLED(CONFIG_CPUMASK_OFFSTACK) && !tmp)
> + continue;
> +
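
To spell out why tmp can be NULL in the first place: KVM's callers
deliberately ignore zalloc_cpumask_var() failure. A minimal sketch of
that pattern (example_caller() is hypothetical, standing in for real
callers such as kvm_make_all_cpus_request()):

#include <linux/cpumask.h>
#include <linux/gfp.h>

static void example_caller(void)
{
	cpumask_var_t tmp;

	/*
	 * With CONFIG_CPUMASK_OFFSTACK=y this heap-allocates and zeroes
	 * the mask, and on failure leaves tmp == NULL; with
	 * CONFIG_CPUMASK_OFFSTACK=n it only zeroes the on-stack array
	 * and always succeeds, so tmp can never be NULL.  The return
	 * value is intentionally ignored; the kick path falls back to
	 * cpu_online_mask instead.
	 */
	zalloc_cpumask_var(&tmp, GFP_ATOMIC);

	/* ... fill the mask and kick, tolerating a NULL tmp ... */

	free_cpumask_var(tmp);	/* a no-op or kfree(NULL); safe either way */
}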

Hello, Sean

I don't think it is a good idea to reinvent cpumask_available(). If
cpumask_available() works for you, you could rework the patch as
follows:

Thanks,
Lai

diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index 3e67c93ca403..ca043ec7ed74 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -245,9 +245,11 @@ static void ack_flush(void *_completed)
{
}

-static inline bool kvm_kick_many_cpus(const struct cpumask *cpus, bool wait)
+static inline bool kvm_kick_many_cpus(cpumask_var_t tmp, bool wait)
{
- if (unlikely(!cpus))
+ const struct cpumask *cpus = tmp;
+
+ if (unlikely(!cpumask_available(tmp)))
cpus = cpu_online_mask;

if (cpumask_empty(cpus))
@@ -278,7 +280,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
if (!(req & KVM_REQUEST_NO_WAKEUP) && kvm_vcpu_wake_up(vcpu))
continue;

- if (tmp != NULL && cpu != -1 && cpu != me &&
+ if (cpumask_available(tmp) && cpu != -1 && cpu != me &&
kvm_request_needs_ipi(vcpu, req))
__cpumask_set_cpu(cpu, tmp);
}
> /*
> * Note, the vCPU could get migrated to a different pCPU at any
> * point after kvm_request_needs_ipi(), which could result in
> @@ -288,7 +296,7 @@ bool kvm_make_vcpus_request_mask(struct kvm *kvm, unsigned int req,
> * were reading SPTEs _before_ any changes were finalized. See
> * kvm_vcpu_kick() for more details on handling requests.
> */
> - if (tmp != NULL && kvm_request_needs_ipi(vcpu, req)) {
> + if (kvm_request_needs_ipi(vcpu, req)) {
> cpu = READ_ONCE(vcpu->cpu);
> if (cpu != -1 && cpu != me)
> __cpumask_set_cpu(cpu, tmp);
> --
> 2.33.0.rc2.250.ged5fa647cd-goog
>
