From: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Subject: Re: [PATCH v2 2/2] mm: hugetlb: soft-offline: dissolve_free_huge_page() return zero on !PageHuge
Date: 2019-06-12
On Tue, Jun 11, 2019 at 10:16:03AM -0700, Mike Kravetz wrote:
> On 6/10/19 1:18 AM, Naoya Horiguchi wrote:
> > madvise(MADV_SOFT_OFFLINE) often returns -EBUSY when soft offline is
> > called on hugepages with overcommitting enabled. This is caused by
> > suboptimal handling in the current soft-offline code. See the
> > following part:
> >
> >         ret = migrate_pages(&pagelist, new_page, NULL, MPOL_MF_MOVE_ALL,
> >                                 MIGRATE_SYNC, MR_MEMORY_FAILURE);
> >         if (ret) {
> >                 ...
> >         } else {
> >                 /*
> >                  * We set PG_hwpoison only when the migration source hugepage
> >                  * was successfully dissolved, because otherwise hwpoisoned
> >                  * hugepage remains on free hugepage list, then userspace will
> >                  * find it as SIGBUS by allocation failure. That's not expected
> >                  * in soft-offlining.
> >                  */
> >                 ret = dissolve_free_huge_page(page);
> >                 if (!ret) {
> >                         if (set_hwpoison_free_buddy_page(page))
> >                                 num_poisoned_pages_inc();
> >                 }
> >         }
> >         return ret;
> >
> > Here dissolve_free_huge_page() returns -EBUSY if the migration source page
> > was freed into buddy in migrate_pages(), but even in that case
> > set_hwpoison_free_buddy_page() can still succeed. So the current code
> > gives up offlining too early.
> >
> > dissolve_free_huge_page() checks whether a given hugepage is suitable for
> > dissolving, and it should return success in the !PageHuge() case because
> > the given page is then considered already dissolved.
> >
> > This change also affects other callers of dissolve_free_huge_page(),
> > which are cleaned up together.
> >
> > Reported-by: Chen, Jerry T <jerry.t.chen@intel.com>
> > Tested-by: Chen, Jerry T <jerry.t.chen@intel.com>
> > Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
> > Fixes: 6bc9b56433b76 ("mm: fix race on soft-offlining")
> > Cc: <stable@vger.kernel.org> # v4.19+
> > ---
> > mm/hugetlb.c | 15 +++++++++------
> > mm/memory-failure.c | 5 +----
> > 2 files changed, 10 insertions(+), 10 deletions(-)
> >
> > diff --git v5.2-rc3/mm/hugetlb.c v5.2-rc3_patched/mm/hugetlb.c
> > index ac843d3..048d071 100644
> > --- v5.2-rc3/mm/hugetlb.c
> > +++ v5.2-rc3_patched/mm/hugetlb.c
> > @@ -1519,7 +1519,12 @@ int dissolve_free_huge_page(struct page *page)
>
> Please update the function description for dissolve_free_huge_page() as
> well. It currently says, "Returns -EBUSY if the dissolution fails because
> a give page is not a free hugepage" which is no longer true as a result of
> this change.

Thanks for pointing out, I completely missed that.
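
Maybe something like this for the description (just draft wording, I'll
finalize it in the next version):

        /*
         * Dissolve a given free hugepage into free buddy pages. This function
         * does nothing for in-use hugepages and non-hugepages.
         * Returns 0 on success, otherwise negated error.
         */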

>
> >          int rc = -EBUSY;
> >  
> >          spin_lock(&hugetlb_lock);
> > -        if (PageHuge(page) && !page_count(page)) {
> > +        if (!PageHuge(page)) {
> > +                rc = 0;
> > +                goto out;
> > +        }
> > +
> > +        if (!page_count(page)) {
> >                  struct page *head = compound_head(page);
> >                  struct hstate *h = page_hstate(head);
> >                  int nid = page_to_nid(head);
> > @@ -1564,11 +1569,9 @@ int dissolve_free_huge_pages(unsigned long start_pfn, unsigned long end_pfn)
> >
> >          for (pfn = start_pfn; pfn < end_pfn; pfn += 1 << minimum_order) {
> >                  page = pfn_to_page(pfn);
> > -                if (PageHuge(page) && !page_count(page)) {
> > -                        rc = dissolve_free_huge_page(page);
> > -                        if (rc)
> > -                                break;
> > -                }
>
> We may want to consider keeping at least the PageHuge(page) check before
> calling dissolve_free_huge_page(). dissolve_free_huge_pages is called as
> part of memory offline processing. We do not know if the memory to be offlined
> contains huge pages or not. With your changes, we are taking hugetlb_lock
> on each call to dissolve_free_huge_page just to discover that the page is
> not a huge page.
>
> You 'could' add a PageHuge(page) check to dissolve_free_huge_page before
> taking the lock. However, you would need to check again after taking the
> lock.

Right, I'll do this.

What was in my mind when writing this was that I actually don't like
PageHuge() because it's slow (not inlined) and called everywhere in mm
code, so I'd like to reduce its use where possible.
But I now see that dissolve_free_huge_page() is a relatively rare event
compared to hugepage allocation/free, so dissolve_free_huge_page() should
take the burden of prechecking PageHuge() instead of speculatively taking
hugetlb_lock and disrupting the hot path.
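
Concretely, I'm thinking of something like the following (an untested
sketch just to show the unlocked precheck plus the recheck under
hugetlb_lock; details may change in the next version):

        int dissolve_free_huge_page(struct page *page)
        {
                int rc = -EBUSY;

                /* Not to disrupt normal path by vainly holding hugetlb_lock */
                if (!PageHuge(page))
                        return 0;

                spin_lock(&hugetlb_lock);
                /* Recheck under the lock: the page can be dissolved or freed
                 * from under us while we were not holding the lock. */
                if (!PageHuge(page)) {
                        rc = 0;
                        goto out;
                }

                if (!page_count(page)) {
                        ...
                }
        out:
                spin_unlock(&hugetlb_lock);
                return rc;
        }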

Thanks,
- Naoya
