Date: Mon, 24 Jul 2023 17:34:51 +0800
Subject: Re: [PATCH v7 12/12] KVM: arm64: Use TLBI range-based intructions for unmap
From: Shaoqin Huang <>
Hi Raghavendra,

On 7/22/23 10:22, Raghavendra Rao Ananta wrote:
> The current implementation of the stage-2 unmap walker traverses
> the given range and, as a part of break-before-make, performs
> TLB invalidations with a DSB for every PTE. A multitude of this
> combination could cause a performance bottleneck on some systems.
>
> Hence, if the system supports FEAT_TLBIRANGE, defer the TLB
> invalidations until the entire walk is finished, and then
> use range-based instructions to invalidate the TLBs in one go.
> Condition deferred TLB invalidation on the system supporting FWB,
> as the optimization is entirely pointless when the unmap walker
> needs to perform CMOs.
>
> Rename stage2_put_pte() to stage2_unmap_put_pte() as the function
> now serves the stage-2 unmap walker specifically, rather than
> acting generic.
>
> Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> ---
>  arch/arm64/kvm/hyp/pgtable.c | 67 +++++++++++++++++++++++++++++++-----
>  1 file changed, 58 insertions(+), 9 deletions(-)
>
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 5ef098af1736..cf88933a2ea0 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -831,16 +831,54 @@ static void stage2_make_pte(const struct kvm_pgtable_visit_ctx *ctx, kvm_pte_t n
>  	smp_store_release(ctx->ptep, new);
>  }
>
> -static void stage2_put_pte(const struct kvm_pgtable_visit_ctx *ctx, struct kvm_s2_mmu *mmu,
> -			   struct kvm_pgtable_mm_ops *mm_ops)
> +struct stage2_unmap_data {
> +	struct kvm_pgtable *pgt;
> +	bool defer_tlb_flush_init;
> +};
> +
> +static bool __stage2_unmap_defer_tlb_flush(struct kvm_pgtable *pgt)
> +{
> +	/*
> +	 * If FEAT_TLBIRANGE is implemented, defer the individual
> +	 * TLB invalidations until the entire walk is finished, and
> +	 * then use the range-based TLBI instructions to do the
> +	 * invalidations. Condition deferred TLB invalidation on the
> +	 * system supporting FWB, as the optimization is entirely
> +	 * pointless when the unmap walker needs to perform CMOs.
> +	 */
> +	return system_supports_tlb_range() && stage2_has_fwb(pgt);
> +}
> +
> +static bool stage2_unmap_defer_tlb_flush(struct stage2_unmap_data *unmap_data)
> +{
> +	bool defer_tlb_flush = __stage2_unmap_defer_tlb_flush(unmap_data->pgt);
> +
> +	/*
> +	 * Since __stage2_unmap_defer_tlb_flush() is based on alternative
> +	 * patching and the TLBIs' operations behavior depend on this,
> +	 * track if there's any change in the state during the unmap sequence.
> +	 */
> +	WARN_ON(unmap_data->defer_tlb_flush_init != defer_tlb_flush);
> +	return defer_tlb_flush;
> +}
> +
> +static void stage2_unmap_put_pte(const struct kvm_pgtable_visit_ctx *ctx,
> +				 struct kvm_s2_mmu *mmu,
> +				 struct kvm_pgtable_mm_ops *mm_ops)
>  {
> +	struct stage2_unmap_data *unmap_data = ctx->arg;
> +
>  	/*
> -	 * Clear the existing PTE, and perform break-before-make with
> -	 * TLB maintenance if it was valid.
> +	 * Clear the existing PTE, and perform break-before-make if it was
> +	 * valid. Depending on the system support, the TLB maintenance for
> +	 * the same can be deferred until the entire unmap is completed.
>  	 */
>  	if (kvm_pte_valid(ctx->old)) {
>  		kvm_clear_pte(ctx->ptep);
> -		kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu, ctx->addr, ctx->level);
> +
> +		if (!stage2_unmap_defer_tlb_flush(unmap_data))

Why not directly check (unmap_data->defer_tlb_flush_init) here?
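
Just to illustrate the alternative (an untested sketch; note that it also
drops the WARN_ON() consistency check that stage2_unmap_defer_tlb_flush()
performs):

		/* hypothetical: test the value cached at the start of the walk */
		if (!unmap_data->defer_tlb_flush_init)
			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
				     ctx->addr, ctx->level);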
> +			kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
> +					ctx->addr, ctx->level);

A small indentation hint: ctx->addr can be aligned with __kvm_tlb_flush_vmid_ipa.
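
That is, something along these lines (whitespace only; the leading
indentation depth is elided here):

	kvm_call_hyp(__kvm_tlb_flush_vmid_ipa, mmu,
		     ctx->addr, ctx->level);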

Thanks,
Shaoqin

>  	}
>
>  	mm_ops->put_page(ctx->ptep);
> @@ -1070,7 +1108,8 @@ int kvm_pgtable_stage2_set_owner(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>  			       enum kvm_pgtable_walk_flags visit)
>  {
> -	struct kvm_pgtable *pgt = ctx->arg;
> +	struct stage2_unmap_data *unmap_data = ctx->arg;
> +	struct kvm_pgtable *pgt = unmap_data->pgt;
>  	struct kvm_s2_mmu *mmu = pgt->mmu;
>  	struct kvm_pgtable_mm_ops *mm_ops = ctx->mm_ops;
>  	kvm_pte_t *childp = NULL;
> @@ -1098,7 +1137,7 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>  	 * block entry and rely on the remaining portions being faulted
>  	 * back lazily.
>  	 */
> -	stage2_put_pte(ctx, mmu, mm_ops);
> +	stage2_unmap_put_pte(ctx, mmu, mm_ops);
>
>  	if (need_flush && mm_ops->dcache_clean_inval_poc)
>  		mm_ops->dcache_clean_inval_poc(kvm_pte_follow(ctx->old, mm_ops),
> @@ -1112,13 +1151,23 @@ static int stage2_unmap_walker(const struct kvm_pgtable_visit_ctx *ctx,
>
>  int kvm_pgtable_stage2_unmap(struct kvm_pgtable *pgt, u64 addr, u64 size)
>  {
> +	int ret;
> +	struct stage2_unmap_data unmap_data = {
> +		.pgt = pgt,
> +		.defer_tlb_flush_init = __stage2_unmap_defer_tlb_flush(pgt),
> +	};
>  	struct kvm_pgtable_walker walker = {
>  		.cb	= stage2_unmap_walker,
> -		.arg	= pgt,
> +		.arg	= &unmap_data,
>  		.flags	= KVM_PGTABLE_WALK_LEAF | KVM_PGTABLE_WALK_TABLE_POST,
>  	};
>
> -	return kvm_pgtable_walk(pgt, addr, size, &walker);
> +	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
> +	if (stage2_unmap_defer_tlb_flush(&unmap_data))
> +		/* Perform the deferred TLB invalidations */
> +		kvm_tlb_flush_vmid_range(pgt->mmu, addr, size);
> +
> +	return ret;
>  }
>
>  struct stage2_attr_data {
--
Shaoqin