Date: Tue, 26 Feb 2019
From: David Rientjes <rientjes@google.com>
Subject: Re: [PATCH v4] mm/hugetlb: Fix unsigned overflow in __nr_hugepages_store_common()
On Tue, 26 Feb 2019, Jing Xiangfeng wrote:

> On 2019/2/26 3:17, David Rientjes wrote:
> > On Mon, 25 Feb 2019, Mike Kravetz wrote:
> >
> >> Ok, what about just moving the calculation/check inside the lock as in the
> >> untested patch below?
> >>
> >> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
> >> ---
> >> mm/hugetlb.c | 34 ++++++++++++++++++++++++++--------
> >> 1 file changed, 26 insertions(+), 8 deletions(-)
> >>
> >> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >> index 1c5219193b9e..5afa77dc7bc8 100644
> >> --- a/mm/hugetlb.c
> >> +++ b/mm/hugetlb.c
> >> @@ -2274,7 +2274,7 @@ static int adjust_pool_surplus(struct hstate *h, nodemask_t *nodes_allowed,
> >> }
> >>
> >> #define persistent_huge_pages(h) (h->nr_huge_pages - h->surplus_huge_pages)
> >> -static int set_max_huge_pages(struct hstate *h, unsigned long count,
> >> +static int set_max_huge_pages(struct hstate *h, unsigned long count, int nid,
> >> nodemask_t *nodes_allowed)
> >> {
> >> unsigned long min_count, ret;
> >> @@ -2289,6 +2289,23 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
> >> goto decrease_pool;
> >> }
> >>
> >> + spin_lock(&hugetlb_lock);
> >> +
> >> + /*
> >> + * Check for a node specific request. Adjust global count, but
> >> + * restrict alloc/free to the specified node.
> >> + */
> >> + if (nid != NUMA_NO_NODE) {
> >> + unsigned long old_count = count;
> >> + count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> >> + /*
> >> + * If user specified count causes overflow, set to
> >> + * largest possible value.
> >> + */
> >> + if (count < old_count)
> >> + count = ULONG_MAX;
> >> + }
> >> +
> >> /*
> >> * Increase the pool size
> >> * First take pages out of surplus state. Then make up the
> >> @@ -2300,7 +2317,6 @@ static int set_max_huge_pages(struct hstate *h, unsigned long count,
> >> * pool might be one hugepage larger than it needs to be, but
> >> * within all the constraints specified by the sysctls.
> >> */
> >> - spin_lock(&hugetlb_lock);
> >> while (h->surplus_huge_pages && count > persistent_huge_pages(h)) {
> >> if (!adjust_pool_surplus(h, nodes_allowed, -1))
> >> break;
> >> @@ -2421,16 +2437,18 @@ static ssize_t __nr_hugepages_store_common(bool obey_mempolicy,
> >> nodes_allowed = &node_states[N_MEMORY];
> >> }
> >> } else if (nodes_allowed) {
> >> + /* Node specific request */
> >> + init_nodemask_of_node(nodes_allowed, nid);
> >> + } else {
> >> /*
> >> - * per node hstate attribute: adjust count to global,
> >> - * but restrict alloc/free to the specified node.
> >> + * Node specific request, but we could not allocate
> >> + * node mask. Pass in ALL nodes, and clear nid.
> >> */
> >> - count += h->nr_huge_pages - h->nr_huge_pages_node[nid];
> >> - init_nodemask_of_node(nodes_allowed, nid);
> >> - } else
> >> + nid = NUMA_NO_NODE;
> >> nodes_allowed = &node_states[N_MEMORY];
> >> + }
> >>
> >> - err = set_max_huge_pages(h, count, nodes_allowed);
> >> + err = set_max_huge_pages(h, count, nid, nodes_allowed);
> >> if (err)
> >> goto out;
> >>
> >
> > Looks good; Jing, could you test that this fixes your case?
>
> Yes, I have tested this patch; it also fixes my case.

Great!

Reported-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Tested-by: Jing Xiangfeng <jingxiangfeng@huawei.com>
Acked-by: David Rientjes <rientjes@google.com>
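
For reference, here is a minimal, self-contained user-space sketch of
the overflow clamp used in set_max_huge_pages() above. In C, unsigned
addition wraps modulo 2^BITS, so a wrapped sum compares smaller than
the value it started from; the patch relies on this to saturate the
count at ULONG_MAX rather than letting a large per-node request wrap
around to a small global count. The helper name demo_clamp and the
sample values are illustrative only, not kernel identifiers.

#include <limits.h>
#include <stdio.h>

/*
 * Convert a per-node request into a global count by adding the pages
 * currently on all other nodes, clamping to ULONG_MAX if the addition
 * overflows -- mirroring the nid != NUMA_NO_NODE branch in the patch.
 */
static unsigned long demo_clamp(unsigned long count,
				unsigned long total_pages,
				unsigned long node_pages)
{
	unsigned long old_count = count;

	count += total_pages - node_pages;
	if (count < old_count)	/* the addition wrapped around */
		count = ULONG_MAX;
	return count;
}

int main(void)
{
	/* A request near ULONG_MAX plus pages on other nodes wraps,
	 * so the result is clamped to ULONG_MAX. */
	printf("%lu\n", demo_clamp(ULONG_MAX - 1, 4, 0));
	/* A sane request is simply adjusted upward: prints 102. */
	printf("%lu\n", demo_clamp(100, 4, 2));
	return 0;
}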
