Subject: [PATCH 2/3] fixup! mm/gup: handle huge pmd for follow_pmd_mask()
From: Peter Xu <peterx@redhat.com>

Allow follow_pmd_mask() to take hugetlb tail pages. The old warnings no
longer help, as hugetlb can now legitimately reach this path with tail
pages, so drop them.
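For context, a minimal sketch of why the first dropped assertion is a
problem (old_warning_would_fire() is a hypothetical helper for
illustration, not code from this patch): PageHead() is true only for the
head page of a compound page, while PageCompound() is true for head and
tail pages alike, so a tail page returned by pmd_page() would trip the
old check even though the lookup itself is fine.

#include <linux/mm.h>
#include <linux/memremap.h>

/*
 * Illustrative only: this mirrors the condition of the dropped
 * VM_BUG_ON_PAGE().  Per the changelog above, pmd_page() may now be a
 * tail page for some hugetlb mappings; PageHead() is false for tail
 * pages, so this condition would evaluate true and the assertion would
 * fire on a valid page.
 */
static inline bool old_warning_would_fire(struct page *page)
{
	return !PageHead(page) && !is_zone_device_page(page);
}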

Reported-by: Ryan Roberts <ryan.roberts@arm.com>
Signed-off-by: Peter Xu <peterx@redhat.com>
---
mm/gup.c | 3 ---
1 file changed, 3 deletions(-)

diff --git a/mm/gup.c b/mm/gup.c
index 91d70057aea0..d60b63fcfc82 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -775,8 +775,6 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,
assert_spin_locked(pmd_lockptr(mm, pmd));

page = pmd_page(pmdval);
- VM_BUG_ON_PAGE(!PageHead(page) && !is_zone_device_page(page), page);
-
if ((flags & FOLL_WRITE) &&
!can_follow_write_pmd(pmdval, page, vma, flags))
return NULL;
@@ -805,7 +803,6 @@ static struct page *follow_huge_pmd(struct vm_area_struct *vma,

page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
ctx->page_mask = HPAGE_PMD_NR - 1;
- VM_BUG_ON_PAGE(!PageCompound(page) && !is_zone_device_page(page), page);

return page;
}
--
2.44.0
