Subject: Re: [rfc 5/5] mincore: transparent huge page support
On Tue, Mar 23, 2010 at 03:35:02PM +0100, Johannes Weiner wrote:
> +static int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
> +			    unsigned long addr, unsigned long end,
> +			    unsigned char *vec)
> +{
> +	int huge = 0;
> +#ifdef CONFIG_TRANSPARENT_HUGEPAGE
> +	spin_lock(&vma->vm_mm->page_table_lock);
> +	if (likely(pmd_trans_huge(*pmd))) {
> +		huge = !pmd_trans_splitting(*pmd);

Under mmap_sem (read or write) a hugepage can't materialize under
us, so the pmd_trans_huge() check can be lockless and run _before_
taking the page_table_lock. That's the invariant I used to keep
performance identical for all the fast paths.

And if that weren't the case, it wouldn't be safe to return huge = 0
here anyway, as the page_table_lock has already been released at that
point.
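
To make the ordering concrete, something like this on the caller
side (a sketch only; mincore_pmd_range()/mincore_pte_range() are
hypothetical stand-ins for the real walker, not code from this
patch):

static void mincore_pmd_range(struct vm_area_struct *vma, pmd_t *pmd,
			      unsigned long addr, unsigned long end,
			      unsigned char *vec)
{
	/*
	 * Lockless check: under mmap_sem a huge pmd can't materialize
	 * from under us, so a negative answer is stable and the common
	 * non-huge path never touches page_table_lock.
	 */
	if (pmd_trans_huge(*pmd)) {
		/*
		 * Positive answer: mincore_huge_pmd() takes the lock
		 * and rechecks, because a split can still race in.
		 */
		if (mincore_huge_pmd(vma, pmd, addr, end, vec))
			return;
		/* The pmd was splitting: fall back to the pte walk. */
	}
	mincore_pte_range(vma, pmd, addr, end, vec);
}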

> +		spin_unlock(&vma->vm_mm->page_table_lock);
> +		/*
> +		 * If we have an intact huge pmd entry, all pages in
> +		 * the range are present in the mincore() sense of
> +		 * things.
> +		 *
> +		 * But if the entry is currently being split into
> +		 * normal page mappings, wait for it to finish and
> +		 * signal the fallback to ptes.
> +		 */
> +		if (huge)
> +			memset(vec, 1, (end - addr) >> PAGE_SHIFT);
> +		else
> +			wait_split_huge_page(vma->anon_vma, pmd);
> +	} else
> +		spin_unlock(&vma->vm_mm->page_table_lock);
> +#endif
> +	return huge;
> +}
> +
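
As an aside on the userspace-visible semantics the comment above
describes: mincore() always reports residency in PAGE_SIZE units, so
an intact huge pmd simply shows up as every 4k page in the 2M range
being resident. A standalone demo (ordinary anonymous mapping;
whether the kernel actually backs it with a hugepage is not assumed
here):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2UL << 20;			/* one pmd's worth: 2M */
	size_t pages = len / getpagesize();
	unsigned char *vec = malloc(pages);
	char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (!vec || p == MAP_FAILED)
		return 1;
	memset(p, 0, len);			/* fault the range in */
	if (mincore(p, len, vec))
		return 1;
	/* LSB of vec[i] set => the i-th 4k page is resident */
	printf("first page: %d, last page: %d\n",
	       vec[0] & 1, vec[pages - 1] & 1);
	return 0;
}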

It's probably cleaner to move the mincore_huge_pmd() block into
huge_memory.c and create a dummy for the !CONFIG_TRANSPARENT_HUGEPAGE
version, like I did for all the rest.
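
Roughly like this, with the declaration presumably sitting next to
the other transhuge hooks in huge_mm.h (a sketch, not the final
patch):

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
extern int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
			    unsigned long addr, unsigned long end,
			    unsigned char *vec);
#else /* !CONFIG_TRANSPARENT_HUGEPAGE */
/*
 * Dummy so mincore.c can call it unconditionally; the compiler
 * optimizes the call away and the pte fallback always runs.
 */
static inline int mincore_huge_pmd(struct vm_area_struct *vma, pmd_t *pmd,
				   unsigned long addr, unsigned long end,
				   unsigned char *vec)
{
	return 0;
}
#endif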


I'll incorporate those changes and take care of them myself if you
don't mind, as I'm going to do a new submission for -mm. I greatly
appreciate you taking the time to port this to transhuge; it helps a
lot! ;)

Thanks,
Andrea

