From: Ben Hutchings <>
Date: Sun, 11 Nov 2018 19:49:05 +0000
Subject: [PATCH 3.16 226/366] mm: hugetlb: yield when prepping struct pages
3.16.61-rc1 review patch. If anyone has any objections, please let me know.
------------------
From: Cannon Matthews <cannonmatthews@google.com>
commit 520495fe96d74e05db585fc748351e0504d8f40d upstream.
When booting with very large numbers of gigantic (i.e. 1G) pages, the operations in the loop of gather_bootmem_prealloc, and specifically prep_compound_gigantic_page, take a very long time and can cause a softlockup if enough pages are requested at boot.
For example booting with 3844 1G pages requires prepping (set_compound_head, init the count) over 1 billion 4K tail pages, which takes considerable time.
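For scale: with a 4K base page size a 1G hugepage spans 262,144 struct pages, so 3844 of them mean 3844 * 262,144 ~= 1.01 billion tail pages to touch. A minimal sketch of that per-hugepage inner loop, assuming the set_compound_head()/set_page_count() helpers named above (illustrative only, not the exact 3.16 code, which also has to cope with a non-contiguous mem_map):

	/* Illustrative sketch, not the kernel's exact code: the per-hugepage
	 * cost of prep_compound_gigantic_page() is one pass over every tail
	 * page, i.e. (1 << order) - 1 iterations -- 262,143 for a 1G page.
	 */
	static void prep_tail_pages_sketch(struct page *head, unsigned int order)
	{
		unsigned long i, nr_pages = 1UL << order;

		for (i = 1; i < nr_pages; i++) {
			struct page *p = head + i;

			set_page_count(p, 0);		/* "init the count" */
			set_compound_head(p, head);	/* link tail to head */
		}
	}

With no scheduling point anywhere across those ~10^9 iterations, the soft-lockup watchdog can fire before the loop finishes.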
Add a cond_resched() to the outer loop in gather_bootmem_prealloc() to prevent this lockup.
Tested: Booted with softlockup_panic=1 hugepagesz=1G hugepages=3844 and no softlockup is reported, and the hugepages are reported as successfully set up.
Link: http://lkml.kernel.org/r/20180627214447.260804-1-cannonmatthews@google.com
Signed-off-by: Cannon Matthews <cannonmatthews@google.com>
Reviewed-by: Andrew Morton <akpm@linux-foundation.org>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: Greg Thelen <gthelen@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 mm/hugetlb.c | 1 +
 1 file changed, 1 insertion(+)
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1546,6 +1546,7 @@ static void __init gather_bootmem_preall
 		 */
 		if (hstate_is_gigantic(h))
			adjust_managed_page_count(page, 1 << h->order);
+		cond_resched();
	}
 }
 