Subject: Re: [PATCH v2 0/3] userfaultfd: convert userfaultfd functions to use folios
On 2023/3/14 21:23, Matthew Wilcox wrote:

> On Tue, Mar 14, 2023 at 01:13:47PM +0000, Peng Zhang wrote:
>> From: ZhangPeng<zhangpeng362@huawei.com>
>>
>> This patch series converts several userfaultfd functions to use folios.
>> This series passes the userfaultfd selftests and the LTP userfaultfd
>> test cases.
> That's what you said about the earlier patchset too. Assuming you
> ran the tests, they need to be improved to find the bug that was in
> the earlier version of the patches.

I did run the tests both times before sending the patches. However, the
bug in the earlier version of the patches[1] is a corner case[2] that is
hard to trigger. To hit it, copy_large_folio_from_user() must be called
with allow_pagefault == true, which requires hugetlb_mcopy_atomic_pte()
to return -ENOENT. That in turn requires the earlier call to
copy_large_folio_from_user() with allow_pagefault == false to have
failed, i.e. copy_from_user() must fail. Building a selftest in which
copy_from_user() fails could be difficult; one possible approach is
sketched after the call chain below.

__mcopy_atomic()
  __mcopy_atomic_hugetlb()
    hugetlb_mcopy_atomic_pte()
      copy_large_folio_from_user(..., ..., false)
        copy_from_user()       // copy_from_user() needs to fail ...
      // ... so that ret_val > 0 and -ENOENT is returned
    if (err == -ENOENT)
      copy_large_folio_from_user(..., ..., true);
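
One way a selftest might force that failure is to make the UFFDIO_COPY
source pages non-resident (for example, file-backed memory dropped from
the page cache), so that the first copy_from_user(), which runs under
kmap_atomic() with page faults disabled, is likely to fail. This is only
a sketch under stated assumptions: residency is not guaranteed, which is
exactly what makes the case hard to hit deterministically, and the
helper names and "data_file" are hypothetical.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/userfaultfd.h>

/* Hypothetical helper: mmap a file with distinct per-page contents,
 * then drop it from the page cache so reads must fault in pages. */
static char *make_nonresident_src(size_t len)
{
	int fd = open("data_file", O_RDONLY);	/* hypothetical file */
	char *src = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, 0);

	posix_fadvise(fd, 0, len, POSIX_FADV_DONTNEED);
	return src;
}

/* uffd is assumed to be a userfaultfd already registered in MISSING
 * mode over the hugetlb destination area; that setup is elided. */
static int trigger_retry_path(int uffd, char *huge_dst, size_t huge_sz)
{
	struct uffdio_copy copy = {
		.dst = (unsigned long)huge_dst,
		.src = (unsigned long)make_nonresident_src(huge_sz),
		.len = huge_sz,
	};

	/* If the source pages are cold, the atomic copy fails,
	 * hugetlb_mcopy_atomic_pte() returns -ENOENT, and the copy is
	 * redone with allow_pagefault == true. */
	return ioctl(uffd, UFFDIO_COPY, &copy);
}

Checking each destination subpage's contents afterwards would then
distinguish a correct copy from one that only filled part of the folio.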


[1] https://lore.kernel.org/all/20230314033734.481904-3-zhangpeng362@huawei.com/

> -long copy_huge_page_from_user(struct page *dst_page,
> +long copy_large_folio_from_user(struct folio *dst_folio,
>  			   const void __user *usr_src,
> -			   unsigned int pages_per_huge_page,
>  			   bool allow_pagefault)
>  {
>  	void *page_kaddr;
>  	unsigned long i, rc = 0;
> -	unsigned long ret_val = pages_per_huge_page * PAGE_SIZE;
> +	unsigned int nr_pages = folio_nr_pages(dst_folio);
> +	unsigned long ret_val = nr_pages * PAGE_SIZE;
>  	struct page *subpage;
> +	struct folio *inner_folio;
>
> -	for (i = 0; i < pages_per_huge_page; i++) {
> -		subpage = nth_page(dst_page, i);
> +	for (i = 0; i < nr_pages; i++) {
> +		subpage = folio_page(dst_folio, i);
> +		inner_folio = page_folio(subpage);
>  		if (allow_pagefault)
> -			page_kaddr = kmap(subpage);
> +			page_kaddr = kmap_local_folio(inner_folio, 0);
>  		else
>  			page_kaddr = kmap_atomic(subpage);
>  		rc = copy_from_user(page_kaddr,
>  				usr_src + i * PAGE_SIZE, PAGE_SIZE);
>  		if (allow_pagefault)
> -			kunmap(subpage);
> +			kunmap_local(page_kaddr);
>  		else
>  			kunmap_atomic(page_kaddr);
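
The flaw in the quoted hunk is the kmap_local_folio(inner_folio, 0)
call: page_folio() on a hugetlb subpage returns the whole compound
folio, so offset 0 maps the first subpage on every iteration, and the
allow_pagefault == true path keeps copying over page 0. A minimal sketch
of a correct mapping (illustrative only, not necessarily the exact hunk
applied in v2):

	if (allow_pagefault)
		/* kmap_local_folio() takes a byte offset within the
		 * folio and maps the page containing it, so the offset
		 * must advance with i to reach the i-th subpage. */
		page_kaddr = kmap_local_folio(dst_folio, i * PAGE_SIZE);
	else
		page_kaddr = kmap_atomic(subpage);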


Thanks,
Peng.
