Subject: Re: [PATCH 03/15] KVM: selftests: Align HVA for HugeTLB-backed memslots
Hi Sean,

On 2021/2/11 7:06, Sean Christopherson wrote:
> Align the HVA for HugeTLB memslots, not just THP memslots. Add an
> assert so any future backing types are forced to assess whether or not
> they need to be aligned.
>
> Cc: Ben Gardon <bgardon@google.com>
> Cc: Yanan Wang <wangyanan55@huawei.com>
> Cc: Andrew Jones <drjones@redhat.com>
> Cc: Peter Xu <peterx@redhat.com>
> Cc: Aaron Lewis <aaronlewis@google.com>
> Signed-off-by: Sean Christopherson <seanjc@google.com>
> ---
> tools/testing/selftests/kvm/lib/kvm_util.c | 5 ++++-
> 1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
> index 584167c6dbc7..deaeb47b5a6d 100644
> --- a/tools/testing/selftests/kvm/lib/kvm_util.c
> +++ b/tools/testing/selftests/kvm/lib/kvm_util.c
> @@ -731,8 +731,11 @@ void vm_userspace_mem_region_add(struct kvm_vm *vm,
>  	alignment = 1;
>  #endif
>
> -	if (src_type == VM_MEM_SRC_ANONYMOUS_THP)
> +	if (src_type == VM_MEM_SRC_ANONYMOUS_THP ||
> +	    src_type == VM_MEM_SRC_ANONYMOUS_HUGETLB)
Sorry for the late reply, I just returned from vacation.
I am not sure HVA alignment is really necessary here for hugetlb pages.
THP pages are different: they are dynamically created by the later
madvise(), so the HVA returned from mmap() is only host page size
aligned, not THP page size aligned, and we indeed have to align it
ourselves. But hugetlb pages are pre-allocated on the system, and the
test results below indicate that, with the MAP_HUGETLB flag, the HVA
returned from mmap() is already aligned to the corresponding hugetlb
page size. So maybe the HVAs of hugetlb pages are aligned during their
allocation, or in mmap() itself? If so, I think we had better not align
again here, because the subsequent *region->mmap_size += alignment*
will cause one more hugetlb page to be mapped but never used.

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_1gb
some outputs:
Host  virtual  test memory offset: 0xffff40000000
Host  virtual  test memory offset: 0xffff00000000
Host  virtual  test memory offset: 0x400000000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_2mb
some outputs:
Host  virtual  test memory offset: 0xffff48000000
Host  virtual  test memory offset: 0xffff65400000
Host  virtual  test memory offset: 0xffff6ba00000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_32mb
some outputs:
Host  virtual  test memory offset: 0xffff70000000
Host  virtual  test memory offset: 0xffff4c000000
Host  virtual  test memory offset: 0xffff72000000

cmdline: ./kvm_page_table_test -m 4 -b 1G -s anonymous_hugetlb_64kb
some outputs:
Host  virtual  test memory offset: 0xffff58230000
Host  virtual  test memory offset: 0xffff6ef00000
Host  virtual  test memory offset: 0xffff7c150000
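
Just to illustrate the mmap() behaviour I mean, a minimal stand-alone
check might look like the sketch below (not part of the selftest; it
assumes a 2MB default hugetlb page size and enough pages reserved via
/proc/sys/vm/nr_hugepages):

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

/* Assumed default hugetlb page size of 2MB. */
#define HUGE_PAGE_SIZE	(2UL << 20)

int main(void)
{
	/* Map one anonymous hugetlb page, like the selftest's backing store. */
	void *hva = mmap(NULL, HUGE_PAGE_SIZE, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);

	if (hva == MAP_FAILED) {
		perror("mmap");
		return EXIT_FAILURE;
	}

	/* Check whether the returned HVA is already huge-page aligned. */
	printf("HVA: %p, aligned to huge page size: %s\n", hva,
	       ((unsigned long)hva & (HUGE_PAGE_SIZE - 1)) ? "no" : "yes");

	munmap(hva, HUGE_PAGE_SIZE);
	return EXIT_SUCCESS;
}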

Thanks,
Yanan
>  		alignment = max(huge_page_size, alignment);
> +	else
> +		ASSERT_EQ(src_type, VM_MEM_SRC_ANONYMOUS);
>
>  	/* Add enough memory to align up if necessary */
>  	if (alignment > 1)
