From: Wei Yang <>
Subject: [Patch v2 7/7] mm/hugetlb: narrow the hugetlb_lock protection area during preparing huge page
Date: Fri, 28 Aug 2020 11:32:42 +0800
set_hugetlb_cgroup_[rsvd]() just manipulates page-local data, which does not need to be protected by hugetlb_lock.
Let's take these calls out of the locked region.
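
For context, a minimal sketch of why these helpers are page-local, loosely modeled on the hugetlb_cgroup helpers in include/linux/hugetlb_cgroup.h of this era; the tail-page indices and the simplified bodies are illustrative assumptions, not the exact kernel source:

struct hugetlb_cgroup;

/*
 * Illustrative sketch (simplified, assumed layout): the cgroup
 * pointers are stored in the private fields of tail pages of the
 * compound huge page.  While prep_new_huge_page() runs, the page is
 * not yet visible to anyone else, so these stores cannot race and
 * need no hugetlb_lock.
 */
static inline void set_hugetlb_cgroup(struct page *page,
				      struct hugetlb_cgroup *h_cg)
{
	page[2].private = (unsigned long)h_cg;	/* index is illustrative */
}

static inline void set_hugetlb_cgroup_rsvd(struct page *page,
					   struct hugetlb_cgroup *h_cg)
{
	page[3].private = (unsigned long)h_cg;	/* index is illustrative */
}

hugetlb_lock still covers the h->nr_huge_pages updates below, since those counters are shared hstate state.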
Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 mm/hugetlb.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6ad365dd1e96..ae840dc09197 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1492,9 +1492,9 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
-	spin_lock(&hugetlb_lock);
 	set_hugetlb_cgroup(page, NULL);
 	set_hugetlb_cgroup_rsvd(page, NULL);
+	spin_lock(&hugetlb_lock);
 	h->nr_huge_pages++;
 	h->nr_huge_pages_node[nid]++;
 	spin_unlock(&hugetlb_lock);
--
2.20.1 (Apple Git-117)