From: Naoya Horiguchi
Subject: Re: [PATCH] huegtlbfs: fix page leak during migration of file pages
Date: 2019-02-12
On Mon, Feb 11, 2019 at 03:06:27PM -0800, Mike Kravetz wrote:
> On 2/7/19 11:31 PM, Naoya Horiguchi wrote:
> > On Thu, Feb 07, 2019 at 09:50:30PM -0800, Mike Kravetz wrote:
> >> On 2/7/19 6:31 PM, Naoya Horiguchi wrote:
> >>> On Thu, Feb 07, 2019 at 10:50:55AM -0800, Mike Kravetz wrote:
> >>>> On 1/30/19 1:14 PM, Mike Kravetz wrote:
> >>>>> +++ b/fs/hugetlbfs/inode.c
> >>>>> @@ -859,6 +859,16 @@ static int hugetlbfs_migrate_page(struct address_space *mapping,
> >>>>> 	rc = migrate_huge_page_move_mapping(mapping, newpage, page);
> >>>>> 	if (rc != MIGRATEPAGE_SUCCESS)
> >>>>> 		return rc;
> >>>>> +
> >>>>> +	/*
> >>>>> +	 * page_private is subpool pointer in hugetlb pages, transfer
> >>>>> +	 * if needed.
> >>>>> +	 */
> >>>>> +	if (page_private(page) && !page_private(newpage)) {
> >>>>> +		set_page_private(newpage, page_private(page));
> >>>>> +		set_page_private(page, 0);
> >>>>> +	}
> >>>
> >>> You don't have to copy PagePrivate flag?
> >>>
> >>
> >> Well, my original thought was no. For hugetlb pages, PagePrivate is not
> >> associated with page_private. It indicates a reservation was consumed.
> >> It is set when a hugetlb page is newly allocated, the allocation is
> >> associated with a reservation, and the global reservation count is
> >> decremented. When the page is added to the page cache or rmap,
> >> PagePrivate is cleared. If the page is freed before being added to the
> >> page cache or rmap, PagePrivate tells free_huge_page to restore
> >> (increment) the reserve count, as we did not 'instantiate' the page.
> >>
> >> So, PagePrivate is only set from the time a huge page is allocated until
> >> it is added to the page cache or rmap. My original thought was that the
> >> page could not be migrated during this time. However, I am not sure that
> >> reasoning is correct. The page is not locked, so it would appear that it
> >> could be migrated? But if it can be migrated at this time, then perhaps
> >> there are bigger issues for the (hugetlb) page fault code?
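
For context, that reservation-restore behavior corresponds roughly to this
logic in free_huge_page() (a condensed sketch of the mm/hugetlb.c code of
that era, not verbatim):

	static void free_huge_page(struct page *page)
	{
		struct hstate *h = page_hstate(page);
		/* page_private() holds the subpool pointer for hugetlb pages */
		struct hugepage_subpool *spool =
			(struct hugepage_subpool *)page_private(page);
		/* PagePrivate set means a reservation was consumed at allocation */
		bool restore_reserve = PagePrivate(page);

		set_page_private(page, 0);
		ClearPagePrivate(page);
		/* ... subpool accounting elided ... */
		spin_lock(&hugetlb_lock);
		if (restore_reserve)
			h->resv_huge_pages++;	/* hand the reservation back */
		/* ... free or enqueue the page ... */
		spin_unlock(&hugetlb_lock);
	}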
> >
> > In my understanding, free hugetlb pages are not expected to be passed to
> > migrate_pages(), and currently that's ensured by each migration caller,
> > which checks and avoids free hugetlb pages on its own.
> > migrate_pages() and its internal code are probably not aware of handling
> > free hugetlb pages, so if they are accidentally passed to migration code,
> > that's a big problem, as you are concerned.
> > So the above reasoning should work, at least as long as this assumption
> > is correct.
> >
> > Most migration callers are not interested in moving free hugepages.
> > The one I'm not sure of is the code path from alloc_contig_range().
> > If someone thinks it's worthwhile to migrate free hugepages to get bigger
> > contiguous memory, he/she will try to enable that code path and the
> > assumption will be broken.
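
As an example of that caller-side filtering, the memory-offline path only
treats in-use huge pages as movable: scan_movable_pages() tests
page_huge_active() before handing a hugetlb page to do_migrate_range()
(condensed from mm/memory_hotplug.c of that era; the real code also checks
hugepage_migration_supported()):

	if (PageHuge(page)) {
		if (page_huge_active(page))
			return pfn;	/* in use: movable, migrate it */
		/* free huge page: skip over the rest of it */
		pfn = round_up(pfn + 1, 1 << compound_order(page)) - 1;
	}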
>
> You are correct. We do not migrate free huge pages. I was thinking more
> about problems if we migrate a page while it is being added to a task's page
> table, as in hugetlb_no_page.
>
> Commit bcc54222309c ("mm: hugetlb: introduce page_huge_active") addresses
> this issue, but I believe there is a bug in the implementation.
> isolate_huge_page contains this test:
>
> 	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
> 		ret = false;
> 		goto unlock;
> 	}
>
> If the condition is not met, then the huge page can be isolated and migrated.
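
For reference, the surrounding function (roughly, from mm/hugetlb.c of that
era) takes hugetlb_lock around the whole check-and-isolate step:

	bool isolate_huge_page(struct page *page, struct list_head *list)
	{
		bool ret = true;

		VM_BUG_ON_PAGE(!PageHead(page), page);
		spin_lock(&hugetlb_lock);
		if (!page_huge_active(page) || !get_page_unless_zero(page)) {
			ret = false;
			goto unlock;
		}
		clear_page_huge_active(page);
		list_move_tail(&page->lru, list);
	unlock:
		spin_unlock(&hugetlb_lock);
		return ret;
	}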
>
> In hugetlb_no_page, there is this block of code:
>
> 		page = alloc_huge_page(vma, haddr, 0);
> 		if (IS_ERR(page)) {
> 			ret = vmf_error(PTR_ERR(page));
> 			goto out;
> 		}
> 		clear_huge_page(page, address, pages_per_huge_page(h));
> 		__SetPageUptodate(page);
> 		set_page_huge_active(page);
>
> 		if (vma->vm_flags & VM_MAYSHARE) {
> 			int err = huge_add_to_page_cache(page, mapping, idx);
> 			if (err) {
> 				put_page(page);
> 				if (err == -EEXIST)
> 					goto retry;
> 				goto out;
> 			}
> 		} else {
> 			lock_page(page);
> 			if (unlikely(anon_vma_prepare(vma))) {
> 				ret = VM_FAULT_OOM;
> 				goto backout_unlocked;
> 			}
> 			anon_rmap = 1;
> 		}
> 	} else {
>
> Note that we call set_page_huge_active BEFORE locking the page. This
> means that we can isolate the page and have migration take place while
> we continue to add the page to page tables. I was able to make this
> happen by adding a udelay() after set_page_huge_active to simulate worst-case
> scheduling behavior. It resulted in a VM_BUG_ON while unlocking the page.
> My test had several threads faulting in huge pages while another thread
> offlined the memory blocks, forcing migration.

This shows another problem, so I agree we need a fix.
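
To make that window concrete, the reproduction described above amounts to
something like the following instrumentation in hugetlb_no_page() (the
udelay value here is illustrative, not necessarily the one actually used):

	__SetPageUptodate(page);
	set_page_huge_active(page);	/* now visible to isolate_huge_page() */
	udelay(1000);			/* illustrative: widen the race window */
	/*
	 * A concurrent memory-offline thread can isolate and start migrating
	 * the page here, before this path takes the page lock and installs
	 * the pte; the later unlock_page() then hits the VM_BUG_ON reported.
	 */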

>
> To fix this, we need to delay the set_page_huge_active call until after
> the page is locked. I am testing a patch with this change. Perhaps we
> should even delay calling set_page_huge_active until we know there are
> no errors and we know the page is actually in page tables?

Yes, calling set_page_huge_active after the page table is set up sounds nice to me.
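
A sketch of that ordering (not a final patch; new_page is an illustrative
flag marking the newly-allocated path) would be to mark the page active
only at the end of hugetlb_no_page(), while the page is still locked:

	spin_unlock(ptl);		/* pte is in place at this point */

	/*
	 * Only newly allocated pages need to be marked active here; a page
	 * found in the page cache was marked by whoever first added it.
	 */
	if (new_page)
		set_page_huge_active(page);

	unlock_page(page);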

>
> While looking at this, I think there is another issue. When a hugetlb
> page is migrated, we do not migrate the 'page_huge_active' state of the
> page. That should be moved as the page is migrated. Correct?

Yes, and I think that putback_active_hugepage(new_hpage) at the last step
of the migration sequence handles the copying of the 'page_huge_active' state.
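
For reference, that is where the new page gets re-marked active (roughly,
from mm/hugetlb.c of that era):

	void putback_active_hugepage(struct page *page)
	{
		VM_BUG_ON_PAGE(!PageHead(page), page);
		spin_lock(&hugetlb_lock);
		set_page_huge_active(page);
		list_move_tail(&page->lru,
			       &(page_hstate(page))->hugepage_activelist);
		spin_unlock(&hugetlb_lock);
		put_page(page);
	}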

Thanks,
Naoya Horiguchi
