Subject: Re: [PATCH] mm/pagewalk.c: walk_page_range should avoid VM_PFNMAP areas
On Wed, May 01, 2013 at 08:47:02AM -0700, David Rientjes wrote:
> On Wed, 1 May 2013, Cliff Wickman wrote:
>
> > Index: linux/mm/pagewalk.c
> > ===================================================================
> > --- linux.orig/mm/pagewalk.c
> > +++ linux/mm/pagewalk.c
> > @@ -127,22 +127,6 @@ static int walk_hugetlb_range(struct vm_
> > return 0;
> > }
> >
> > -static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> > -{
> > - struct vm_area_struct *vma;
> > -
> > - /* We don't need vma lookup at all. */
> > - if (!walk->hugetlb_entry)
> > - return NULL;
> > -
> > - VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
> > - vma = find_vma(walk->mm, addr);
> > - if (vma && vma->vm_start <= addr && is_vm_hugetlb_page(vma))
> > - return vma;
> > -
> > - return NULL;
> > -}
> > -
> > #else /* CONFIG_HUGETLB_PAGE */
> > static struct vm_area_struct* hugetlb_vma(unsigned long addr, struct mm_walk *walk)
> > {
> > @@ -200,28 +184,46 @@ int walk_page_range(unsigned long addr,
> >
> > pgd = pgd_offset(walk->mm, addr);
> > do {
> > - struct vm_area_struct *vma;
> > + struct vm_area_struct *vma = NULL;
> >
> > next = pgd_addr_end(addr, end);
> >
> > /*
> > - * handle hugetlb vma individually because pagetable walk for
> > - * the hugetlb page is dependent on the architecture and
> > - * we can't handled it in the same manner as non-huge pages.
> > + * Check any special vma's within this range.
> > */
> > - vma = hugetlb_vma(addr, walk);
> > + VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));
>
> I think this should be moved out of the iteration. It's currently inside
> it even before your patch, but I think it's pointless.

I don't follow. We are iterating through a range of addresses. When
we come to a range that is VM_PFNMAP, we skip it. How can we take that
check out of the iteration?
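
If you mean only the VM_BUG_ON, I suppose something like the sketch
below would assert mmap_sem once before the loop and keep the
find_vma()/VM_PFNMAP skip inside it; the per-address lookup itself
can't move out.  (Untested, and the rest of the loop body is
abbreviated.)

int walk_page_range(unsigned long addr, unsigned long end,
		    struct mm_walk *walk)
{
	unsigned long next;
	int err = 0;
	pgd_t *pgd;

	if (addr >= end)
		return err;

	if (!walk->mm)
		return -EINVAL;

	/* Loop-invariant: the caller holds mmap_sem for the whole walk. */
	VM_BUG_ON(!rwsem_is_locked(&walk->mm->mmap_sem));

	pgd = pgd_offset(walk->mm, addr);
	do {
		struct vm_area_struct *vma;

		next = pgd_addr_end(addr, end);

		/* The per-range lookup and VM_PFNMAP skip stay inside. */
		vma = find_vma(walk->mm, addr);
		if (vma && (vma->vm_start <= addr) &&
		    (vma->vm_flags & VM_PFNMAP)) {
			next = vma->vm_end;
			pgd = pgd_offset(walk->mm, next);
			continue;
		}
		/* ... hugetlb and pgd_none_or_clear_bad() handling ... */
	} while (pgd++, addr = next, addr != end);

	return err;
}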

> > + vma = find_vma(walk->mm, addr);
> > if (vma) {
> > - if (vma->vm_end < next)
> > + /*
> > + * There are no page structures backing a VM_PFNMAP
> > + * range, so allow no split_huge_page_pmd().
> > + */
> > + if (vma->vm_flags & VM_PFNMAP) {
> > next = vma->vm_end;
> > + pgd = pgd_offset(walk->mm, next);
> > + continue;
> > + }
>
> What if end < vma->vm_end?

Yes, a bad omission. Thanks for pointing that out.
It should be if ((vma->vm_start <= addr) && (vma->vm_flags & VM_PFNMAP)),
since find_vma() can return a vma that starts above addr.
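
Roughly, the corrected skip would then look like this (just a sketch;
the hugetlb handling below it stays as in the patch):

	vma = find_vma(walk->mm, addr);
	if (vma) {
		/*
		 * There are no page structures backing a VM_PFNMAP
		 * range, so allow no split_huge_page_pmd().
		 *
		 * find_vma() can return a vma that starts above addr,
		 * so only skip when addr really lies inside the
		 * VM_PFNMAP vma.
		 */
		if ((vma->vm_start <= addr) &&
		    (vma->vm_flags & VM_PFNMAP)) {
			next = vma->vm_end;
			pgd = pgd_offset(walk->mm, next);
			continue;
		}
		/* ... hugetlb handling as in the patch ... */
	}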

-Cliff
> > /*
> > - * Hugepage is very tightly coupled with vma, so
> > - * walk through hugetlb entries within a given vma.
> > + * Handle hugetlb vma individually because pagetable
> > + * walk for the hugetlb page is dependent on the
> > + * architecture and we can't handled it in the same
> > + * manner as non-huge pages.
> > */
> > - err = walk_hugetlb_range(vma, addr, next, walk);
> > - if (err)
> > - break;
> > - pgd = pgd_offset(walk->mm, next);
> > - continue;
> > + if (walk->hugetlb_entry && (vma->vm_start <= addr) &&
> > + is_vm_hugetlb_page(vma)) {
> > + if (vma->vm_end < next)
> > + next = vma->vm_end;
> > + /*
> > + * Hugepage is very tightly coupled with vma,
> > + * so walk through hugetlb entries within a
> > + * given vma.
> > + */
> > + err = walk_hugetlb_range(vma, addr, next, walk);
> > + if (err)
> > + break;
> > + pgd = pgd_offset(walk->mm, next);
> > + continue;
> > + }
> > }
> >
> > if (pgd_none_or_clear_bad(pgd)) {

--
Cliff Wickman
SGI
cpw@sgi.com
(651) 683-3824

