Subject: Re: [PATCH 3.16 121/178] kvm: arm/arm64: Fix locking for kvm_free_stage2_pgd
From: Suzuki K Poulose <suzuki.poulose@arm.com>
Date: 2017-07-17
On 16/07/17 14:56, Ben Hutchings wrote:
> 3.16.46-rc1 review patch. If anyone has any objections, please let me know.
>
> ------------------
>
> From: Suzuki K Poulose <suzuki.poulose@arm.com>
>
> commit 8b3405e345b5a098101b0c31b264c812bba045d9 upstream.
>
> In kvm_free_stage2_pgd() we don't hold the kvm->mmu_lock while calling
> unmap_stage2_range() on the entire memory range for the guest. This could
> cause problems with other callers (e.g, munmap on a memslot) trying to
> unmap a range. And since we have to unmap the entire Guest memory range
> holding a spinlock, make sure we yield the lock if necessary, after we
> unmap each PUD range.
>
> Fixes: commit d5d8184d35c9 ("KVM: ARM: Memory virtualization setup")
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Marc Zyngier <marc.zyngier@arm.com>
> Cc: Christoffer Dall <christoffer.dall@linaro.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> [ Avoid vCPU starvation and lockup detector warnings ]
> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com>
> Signed-off-by: Suzuki K Poulose <suzuki.poulose@arm.com>
> Signed-off-by: Christoffer Dall <cdall@linaro.org>
> [bwh: Backported to 3.16:
> - unmap_stage2_range() is a wrapper around unmap_range(), which is also used for
> HYP page table setup. So unmap_range() should do the cond_resched_lock(), but
> only if kvm != NULL.
> - Adjust context]
> Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
> ---
> arch/arm/kvm/mmu.c | 10 ++++++++++
> 1 file changed, 10 insertions(+)
>
> --- a/arch/arm/kvm/mmu.c
> +++ b/arch/arm/kvm/mmu.c
> @@ -199,6 +199,12 @@ static void unmap_range(struct kvm *kvm,
>  		next = kvm_pgd_addr_end(addr, end);
>  		if (!pgd_none(*pgd))
>  			unmap_puds(kvm, pgd, addr, next);
> +		/*
> +		 * If the range is too large, release the kvm->mmu_lock
> +		 * to prevent starvation and lockup detector warnings.
> +		 */
> +		if (kvm && next != end)
> +			cond_resched_lock(&kvm->mmu_lock);
>  	} while (pgd++, addr = next, addr != end);
>  }
>
> @@ -553,6 +559,7 @@ int kvm_alloc_stage2_pgd(struct kvm *kvm
>   */
>  static void unmap_stage2_range(struct kvm *kvm, phys_addr_t start, u64 size)
>  {
> +	assert_spin_locked(&kvm->mmu_lock);
>  	unmap_range(kvm, kvm->arch.pgd, start, size);
>  }
>
> @@ -637,7 +644,10 @@ void kvm_free_stage2_pgd(struct kvm *kvm
>  	if (kvm->arch.pgd == NULL)
>  		return;
> 
> +	spin_lock(&kvm->mmu_lock);
>  	unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
> +	spin_unlock(&kvm->mmu_lock);
> +
>  	free_pages((unsigned long)kvm->arch.pgd, S2_PGD_ORDER);
>  	kvm->arch.pgd = NULL;
>  }
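
The hunks above apply a general pattern: a long page-table walk that must run under a spinlock yields that lock between top-level entries, so other lock contenders (e.g. an munmap on a memslot) and the scheduler are not starved for the whole of KVM_PHYS_SIZE. A minimal sketch of that pattern follows; do_one_chunk() and CHUNK_SIZE are hypothetical stand-ins for unmap_puds() and the per-PGD span, and this is not the 3.16 source.

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/sizes.h>
#include <linux/spinlock.h>
#include <linux/types.h>

#define CHUNK_SIZE	SZ_1G		/* pretend one top-level entry maps 1GB */

static void do_one_chunk(phys_addr_t addr, phys_addr_t next);	/* hypothetical per-entry work */

static void long_walk(spinlock_t *lock, phys_addr_t addr, phys_addr_t end)
{
	phys_addr_t next;

	spin_lock(lock);
	do {
		/* End of the current top-level entry, clamped to the range end. */
		next = min_t(phys_addr_t, ALIGN(addr + 1, CHUNK_SIZE), end);
		do_one_chunk(addr, next);

		/*
		 * If more work remains, briefly give the lock up when someone
		 * else is spinning on it or we ought to reschedule.
		 */
		if (next != end)
			cond_resched_lock(lock);
	} while (addr = next, addr != end);
	spin_unlock(lock);
}

cond_resched_lock() drops the lock, reschedules or lets a waiter in if necessary, and re-takes it; because the cursor here is just an address, the walk can resume from 'next' safely after the lock has been dropped, which matches the patch's choice of yielding at the PGD level rather than deeper in the walk.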

Hi Ben,

There is a follow-up patch for this one, to make sure we check/free the PGD under the
mmu_lock (a rough sketch of that idea follows at the end of this message):

commit 6c0d706b563af73 ("kvm: arm/arm64: Fix race in resetting stage2 PGD")

So unless you want to fold that in, this looks alright. Looks like we missed a Cc:
stable for that. I will send it to stable soon.

Cheers
Suzuki
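
For reference, the follow-up Suzuki mentions addresses the case where two paths can race on freeing the stage-2 PGD. Going only by the description above ("check/free the PGD under the mmu_lock"), a rough sketch of how kvm_free_stage2_pgd() from the patch could be reshaped might look like the following; this is an illustration, not the actual commit 6c0d706b563af73:

void kvm_free_stage2_pgd(struct kvm *kvm)
{
	pgd_t *pgd = NULL;

	/*
	 * Test and clear the PGD pointer while holding the lock, so only
	 * one caller can observe a non-NULL pgd and take ownership of it.
	 */
	spin_lock(&kvm->mmu_lock);
	if (kvm->arch.pgd) {
		unmap_stage2_range(kvm, 0, KVM_PHYS_SIZE);
		pgd = kvm->arch.pgd;
		kvm->arch.pgd = NULL;
	}
	spin_unlock(&kvm->mmu_lock);

	/* Freeing the pages no longer needs the lock once ownership is taken. */
	if (pgd)
		free_pages((unsigned long)pgd, S2_PGD_ORDER);
}

The unmap still happens with the lock held, which also satisfies the assert_spin_locked() added to unmap_stage2_range() in the patch above.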


