From: Zi Yan <ziy@nvidia.com>
Subject: [RFC PATCH v3 3/8] mm: migrate: allocate the right size of non hugetlb or THP compound pages.
Date: 5 Jan 2022

alloc_migration_target() is used by alloc_contig_range(), and non-LRU
movable compound pages can be migrated there. The current code does not
allocate the right page size for such pages. Check for THP precisely
using is_transparent_hugepage() and add allocation support for non-LRU
movable compound pages.
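
For context, the distinction relied on here is roughly the following
(paraphrased and simplified from v5.16-era include/linux/page-flags.h
and mm/huge_memory.c, with debug assertions omitted; not part of this
patch):

/* PageTransHuge() only checks for a compound head page, so a non-LRU
 * movable compound page (e.g. a higher-order driver allocation) passes
 * it as well. */
static inline int PageTransHuge(struct page *page)
{
        return PageHead(page);
}

/* is_transparent_hugepage() additionally checks the compound
 * destructor, so only genuine THPs (and the huge zero page) pass. */
bool is_transparent_hugepage(struct page *page)
{
        if (!PageCompound(page))
                return false;

        page = compound_head(page);
        return is_huge_zero_page(page) ||
               page[1].compound_dtor == TRANSHUGE_PAGE_DTOR;
}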

Signed-off-by: Zi Yan <ziy@nvidia.com>
---
mm/migrate.c | 11 +++++++----
1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/mm/migrate.c b/mm/migrate.c
index c7da064b4781..b1851ffb8576 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -1546,9 +1546,7 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
 
                 gfp_mask = htlb_modify_alloc_mask(h, gfp_mask);
                 return alloc_huge_page_nodemask(h, nid, mtc->nmask, gfp_mask);
-        }
-
-        if (PageTransHuge(page)) {
+        } else if (is_transparent_hugepage(page)) {
                 /*
                  * clear __GFP_RECLAIM to make the migration callback
                  * consistent with regular THP allocations.
@@ -1556,14 +1554,19 @@ struct page *alloc_migration_target(struct page *page, unsigned long private)
                 gfp_mask &= ~__GFP_RECLAIM;
                 gfp_mask |= GFP_TRANSHUGE;
                 order = HPAGE_PMD_ORDER;
+        } else if (PageCompound(page)) {
+                /* for non-LRU movable compound pages */
+                gfp_mask |= __GFP_COMP;
+                order = compound_order(page);
         }
+
         zidx = zone_idx(page_zone(page));
         if (is_highmem_idx(zidx) || zidx == ZONE_MOVABLE)
                 gfp_mask |= __GFP_HIGHMEM;
 
         new_page = __alloc_pages(gfp_mask, order, nid, mtc->nmask);
 
-        if (new_page && PageTransHuge(new_page))
+        if (new_page && is_transparent_hugepage(page))
                 prep_transhuge_page(new_page);
 
         return new_page;
--
2.34.1