Subject: Re: [PATCH v1] mm: hugetlb: fix hugepage memory leak caused by wrong reserve count
> 
> When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back to
> alloc_buddy_huge_page() to create a hugepage directly from the buddy allocator.
> In that case, however, if alloc_buddy_huge_page() succeeds, we don't decrement
> h->resv_huge_pages, which means that a successful hugetlb_fault() returns without
> releasing the reserve count. As a result, a subsequent hugetlb_fault() might fail
> even though there are still free hugepages.
>
> This patch simply adds decrementing code on that code path.
>
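For readers without the tree at hand, here is a paraphrased sketch of the
relevant allocation flow in v4.3's alloc_huge_page(), reconstructed around the
hunk quoted below (not the verbatim source): dequeue_huge_page_vma() already
does the SetPagePrivate()/resv_huge_pages-- bookkeeping when it hands out a
reserved page, so the fix mirrors that bookkeeping on the buddy fallback path.

        /*
         * Paraphrased sketch of the v4.3 alloc_huge_page() allocation
         * path, reconstructed around the hunk below -- not verbatim.
         */
        spin_lock(&hugetlb_lock);
        /*
         * Fast path: take a pre-allocated hugepage off the free list.
         * When the page comes out of the reserve, this helper already
         * does SetPagePrivate(page) and h->resv_huge_pages--.
         */
        page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, gbl_chg);
        if (!page) {
                spin_unlock(&hugetlb_lock);
                /* Fallback: build a fresh hugepage from the buddy allocator. */
                page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
                if (!page)
                        goto out_uncharge_cgroup;
                /*
                 * Before this patch nothing on this path decremented
                 * h->resv_huge_pages, so each fault taking it leaked
                 * one reserved hugepage.
                 */
                spin_lock(&hugetlb_lock);
                list_move(&page->lru, &h->hugepage_activelist);
                /* Fall through */
        }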
> I reproduced this problem when testing the v4.3 kernel in the following situation:
> - the test machine/VM is a NUMA system,
> - hugepage overcommitting is enabled,
> - most hugepages are already allocated and there's only one free hugepage
> left, which is on node 0 (for example),
> - another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
> node 1, tries to allocate a hugepage (a reproducer sketch follows below),
> - the allocation should fail, but the reserve count is still held.
>
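To make the scenario concrete, below is a minimal userspace reproducer sketch
for the MPOL_BIND step. The node numbers, the 2 MB hugepage size, and the
HugePages_Rsvd observation are assumptions for a typical two-node x86_64 test
box; link with -lnuma.

        /*
         * Minimal reproducer sketch: bind to node 1 and fault in one
         * hugepage while the only free hugepage sits on node 0. Node
         * numbers and the 2 MB hugepage size are assumptions; adjust
         * for the test machine. Build: gcc repro.c -o repro -lnuma
         */
        #include <numaif.h>             /* set_mempolicy(), MPOL_BIND */
        #include <sys/mman.h>           /* mmap(), MAP_HUGETLB */
        #include <stdio.h>

        #define HPAGE_SIZE (2UL << 20)  /* assumed default hugepage size */

        int main(void)
        {
                unsigned long nodemask = 1UL << 1;      /* node 1 only */
                char *p;

                if (set_mempolicy(MPOL_BIND, &nodemask,
                                  8 * sizeof(nodemask))) {
                        perror("set_mempolicy");
                        return 1;
                }

                p = mmap(NULL, HPAGE_SIZE, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        return 1;
                }

                /*
                 * The write triggers hugetlb_fault(). On an unpatched
                 * kernel the reserve taken for this mapping stays held
                 * afterwards; watch HugePages_Rsvd in /proc/meminfo.
                 */
                p[0] = 1;
                return 0;
        }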
> Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> Cc: <stable@vger.kernel.org> [3.16+]
> ---
> - the reason I set the stable target to "3.16+" is that this patch can be
> applied easily/automatically to these versions. But the bug itself seems to
> be an old one, so if you are interested in backporting it to older kernels,
> please let me know.
> ---
> mm/hugetlb.c | 5 ++++-
> 1 files changed, 4 insertions(+), 1 deletions(-)
>
> diff --git v4.3/mm/hugetlb.c v4.3_patched/mm/hugetlb.c
> index 9cc7734..77c518c 100644
> --- v4.3/mm/hugetlb.c
> +++ v4.3_patched/mm/hugetlb.c
> @@ -1790,7 +1790,10 @@ struct page *alloc_huge_page(struct vm_area_struct *vma,
> page = alloc_buddy_huge_page(h, NUMA_NO_NODE);
> if (!page)
> goto out_uncharge_cgroup;
> -
> + if (!avoid_reserve && vma_has_reserves(vma, gbl_chg)) {
> + SetPagePrivate(page);
> + h->resv_huge_pages--;
> + }

I am wondering whether this patch was prepared against the linux-next tree.

> spin_lock(&hugetlb_lock);
> list_move(&page->lru, &h->hugepage_activelist);
> /* Fall through */
> --
> 1.7.1


