Subject: Re: [PATCH] udmabuf: revert 'Add support for mapping hugepages (v4)'

>> Skimming over the shmem_read_mapping_page() users, I assume most of
>> them use a VM_PFNMAP mapping (or don't mmap them at all), where we
>> won't be messing with the struct page at all.
>>
>> (That might even allow you to mmap hugetlb sub-pages, because the struct
>> page -- and mapcount -- will be ignored completely and not touched.)
> Oh, are you suggesting that if we do vma->vm_flags |= VM_PFNMAP
> in the mmap handler (mmap_udmabuf) and also do
> vmf_insert_pfn(vma, vmf->address, page_to_pfn(page))
> instead of
> vmf->page = ubuf->pages[pgoff];
> get_page(vmf->page);
>
> in the vma fault handler (udmabuf_vm_fault), we can avoid most of the
> pitfalls you have identified -- including those involving hugetlb sub-pages?

Yes, that's my thinking, but I have to do my homework first to see if
that would really work for hugetlb.
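
If it flies, the conversion itself would be small. A rough, untested
sketch, modeled on the existing mmap_udmabuf/udmabuf_vm_fault (the
ubuf->pages[]/ubuf->pagecount fields follow the snippet quoted above;
the extra VM_IO | VM_DONTEXPAND | VM_DONTDUMP flags are just what PFN
mappings usually set, not something I have verified for this driver):

static vm_fault_t udmabuf_vm_fault(struct vm_fault *vmf)
{
        struct vm_area_struct *vma = vmf->vma;
        struct udmabuf *ubuf = vma->vm_private_data;
        pgoff_t pgoff = vmf->pgoff;

        if (pgoff >= ubuf->pagecount)
                return VM_FAULT_SIGBUS;

        /* Insert the raw PFN; no get_page(), so the struct page --
         * and its mapcount -- is never touched. */
        return vmf_insert_pfn(vma, vmf->address,
                              page_to_pfn(ubuf->pages[pgoff]));
}

static const struct vm_operations_struct udmabuf_vm_ops = {
        .fault = udmabuf_vm_fault,
};

static int mmap_udmabuf(struct dma_buf *buf, struct vm_area_struct *vma)
{
        struct udmabuf *ubuf = buf->priv;

        if ((vma->vm_flags & (VM_SHARED | VM_MAYSHARE)) == 0)
                return -EINVAL;

        /* VM_PFNMAP: tell the core MM there are no struct pages to
         * refcount behind this mapping. (On kernels where vm_flags is
         * sealed, this would be vm_flags_set() instead.) */
        vma->vm_flags |= VM_PFNMAP | VM_IO | VM_DONTEXPAND | VM_DONTDUMP;

        vma->vm_ops = &udmabuf_vm_ops;
        vma->vm_private_data = ubuf;
        return 0;
}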

The thing is, I kind-of consider what udmabuf does a layer violation: we
have a filesystem (shmem/hugetlb) that should handle mappings to user
space, yet a driver decides to bypass that and simply map the pages to
user space itself. (This is revealed by the fact that hugetlb never maps
sub-pages, whereas udmabuf decides to do exactly that.)

In an ideal world everybody would simply mmap() the original memfd, but
given the offset+size configuration within the memfd, that might not
always be desirable. As a workaround, we could mmap() only the PFNs,
leaving the struct pages untouched.
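
(For reference, the "just mmap() the memfd" variant is trivial from
user space; the catch is that the consumer has to learn offset+size out
of band, which is exactly what udmabuf hides. Hypothetical helper, not
from the driver:

#include <sys/mman.h>

/* Map a sub-range of a memfd directly, bypassing udmabuf.
 * offset must be page-aligned -- hugepage-aligned for hugetlb. */
static void *map_memfd_range(int memfd, off_t offset, size_t size)
{
        return mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
                    memfd, offset);
}
)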

I'll have to look closer into that.

--
Cheers,

David / dhildenb
