From: Raghavendra Rao Ananta <rananta@google.com>
Date: Mon, 3 Apr 2023
Subject: Re: [PATCH v2 4/7] KVM: arm64: Implement kvm_arch_flush_remote_tlbs_range()
On Wed, Mar 29, 2023 at 5:53 PM Oliver Upton <oliver.upton@linux.dev> wrote:
>
> On Mon, Feb 06, 2023 at 05:23:37PM +0000, Raghavendra Rao Ananta wrote:
> > Implement kvm_arch_flush_remote_tlbs_range() for arm64,
> > such that it can utilize the TLBI range based instructions
> > if supported.
> >
> > Signed-off-by: Raghavendra Rao Ananta <rananta@google.com>
> > ---
> > arch/arm64/include/asm/kvm_host.h |  3 +++
> > arch/arm64/kvm/mmu.c              | 15 +++++++++++++++
> > 2 files changed, 18 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> > index dee530d75b957..211fab0c1de74 100644
> > --- a/arch/arm64/include/asm/kvm_host.h
> > +++ b/arch/arm64/include/asm/kvm_host.h
> > @@ -1002,6 +1002,9 @@ struct kvm *kvm_arch_alloc_vm(void);
> > #define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS
> > int kvm_arch_flush_remote_tlbs(struct kvm *kvm);
> >
> > +#define __KVM_HAVE_ARCH_FLUSH_REMOTE_TLBS_RANGE
> > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages);
> > +
> > static inline bool kvm_vm_is_protected(struct kvm *kvm)
> > {
> > 	return false;
> > diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> > index e98910a8d0af6..409cb187f4911 100644
> > --- a/arch/arm64/kvm/mmu.c
> > +++ b/arch/arm64/kvm/mmu.c
> > @@ -91,6 +91,21 @@ int kvm_arch_flush_remote_tlbs(struct kvm *kvm)
> > 	return 0;
> > }
> >
> > +int kvm_arch_flush_remote_tlbs_range(struct kvm *kvm, gfn_t start_gfn, u64 pages)
> > +{
> > +	phys_addr_t start, end;
> > +
> > +	if (!system_supports_tlb_range())
> > +		return -EOPNOTSUPP;
>
> There are multiple layers of fallback throughout this series; it would
> appear that deep in __kvm_tlb_flush_range() you're blasting the whole
> VMID if either the range is too large or the feature isn't supported.
>
> Is it possible to just normalize on a single spot to gate the use of
> range-based invalidations? I have a slight preference for doing it deep
> in the handler, as it keeps the upper layers of code a bit more
> readable.
>
I was a little hesitant about this part, since
kvm_arch_flush_remote_tlbs_range() is expected to return -EOPNOTSUPP if
there's indeed no support.
But I see your point. The if-else in kvm_pgtable_stage2_flush_range()
seems redundant, and I can simply handle these conditions inside
__kvm_tlb_flush_range_vmid_ipa() itself, while leaving the
kvm_arch_flush_remote_tlbs_range() implementation as is. Thoughts?
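
Something along these lines, as a rough sketch (the handler's exact
signature, the MAX_TLBI_RANGE_PAGES threshold, and the fallback to
__kvm_tlb_flush_vmid() are illustrative assumptions here, not what the
series currently does):

void __kvm_tlb_flush_range_vmid_ipa(struct kvm_s2_mmu *mmu,
				    phys_addr_t start, phys_addr_t end,
				    int level, int tlb_level)
{
	/*
	 * If range-based TLBIs aren't implemented, or the range is large
	 * enough that invalidating it page by page would cost more than
	 * nuking the whole VMID, fall back to the full flush.
	 */
	if (!system_supports_tlb_range() ||
	    (end - start) >> PAGE_SHIFT > MAX_TLBI_RANGE_PAGES) {
		__kvm_tlb_flush_vmid(mmu);
		return;
	}

	/* ... issue the range-based stage-2 invalidation for [start, end) ... */
}

That way the -EOPNOTSUPP check in kvm_arch_flush_remote_tlbs_range()
stays purely a "does the feature exist" gate, and the size-based
fallback lives in one spot.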

Thank you.
Raghavendra


> > +	start = start_gfn << PAGE_SHIFT;
> > +	end = (start_gfn + pages) << PAGE_SHIFT;
> > +
> > +	kvm_call_hyp(__kvm_tlb_flush_range_vmid_ipa, &kvm->arch.mmu,
> > +		     start, end, KVM_PGTABLE_MAX_LEVELS - 1, 0);
> > +	return 0;
> > +}
> > +
> > static bool kvm_is_device_pfn(unsigned long pfn)
> > {
> > 	return !pfn_is_map_memory(pfn);
> > --
> > 2.39.1.519.gcb327c4b5f-goog
> >
> >
>
> --
> Thanks,
> Oliver
