Subject: Re: [PATCH 2/3] hugetlb: Demand fault handler
Thanks for the review and comments...

On Wed, 2005-10-12 at 16:09 +1000, David Gibson wrote:
> On Tue, Oct 11, 2005 at 01:32:38PM -0500, Adam Litke wrote:
> > Version 5 (Tue, 11 Oct 2005)
> > Deal with hugetlbfs file truncation in find_get_huge_page()
> > Version 4 (Mon, 03 Oct 2005)
> > Make find_get_huge_page bail properly when add_to_page_cache fails
> > due to OOM conditions
> > Version 3 (Thu, 08 Sep 2005)
> > Organized logic in hugetlb_pte_fault() by breaking out
> > find_get_page/alloc_huge_page logic into separate function
> > Removed a few more paranoid checks ( Thanks )
> > Fixed tlb flushing in a race case ( Yanmin Zhang )
> >
> > Version 2 (Wed, 17 Aug 2005)
> > Removed spurious WARN_ON()
> > Patches added earlier in the series (now in mainline):
> > Check for p?d_none() in arch/i386/mm/hugetlbpage.c:huge_pte_offset()
> > Move i386 stale pte check into huge_pte_alloc()
>
> I'm not sure this does fully deal with truncation, I'm afraid - it
> will deal with a truncation well before the fault, but not a
> concurrent truncate(). We'll need the truncate_count/retry logic from
> do_no_page, I think. Andi/Hugh, can you confirm that's correct?

Ok. I can see why we need that.
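
Something like the do_no_page pattern then: sample mapping->truncate_count
before looking the page up, then recheck it under the page_table_lock and
retry the lookup if a truncate got in between.  Roughly this (paraphrasing
mm/memory.c from memory, so the exact details may differ):

        unsigned int sequence = 0;
        ...
 retry:
        if (vma->vm_file) {
                mapping = vma->vm_file->f_mapping;
                sequence = mapping->truncate_count;
                smp_rmb();      /* order the read against the nopage lookup */
        }
        new_page = vma->vm_ops->nopage(vma, address & PAGE_MASK, &ret);
        ...
        spin_lock(&mm->page_table_lock);
        /* a concurrent truncate bumped truncate_count: drop the page, retry */
        if (mapping && unlikely(sequence != mapping->truncate_count)) {
                spin_unlock(&mm->page_table_lock);
                page_cache_release(new_page);
                goto retry;
        }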

> > Initial Post (Fri, 05 Aug 2005)
> >
> > Below is a patch to implement demand faulting for huge pages. The main
> > motivation for changing from prefaulting to demand faulting is so that
> > huge page memory areas can be allocated according to NUMA policy.
> >
> > Thanks to consolidated hugetlb code, switching the behavior requires changing
> > only one fault handler. The bulk of the patch just moves the logic from
> > hugetlb_prefault() to hugetlb_pte_fault() and find_get_huge_page().
>
> While we're at it - it's a minor nit, but I find the distinction
> between hugetlb_pte_fault() and hugetlb_fault() confusing. A better
> name for the former would be hugetlb_no_page(), in which case we
> should probably also move the border between it and
> hugetlb_find_get_page() to match the boundary between do_no_page() and
> mapping->nopage.
>
> How about this, for example:

Yeah, I suppose that division makes more sense when comparing to the
normal fault handler code.

> @@ -338,57 +337,128 @@
>         spin_unlock(&mm->page_table_lock);
> }
>
> -int hugetlb_prefault(struct address_space *mapping, struct vm_area_struct *vma)
> +static struct page *hugetlbfs_nopage(struct vm_area_struct *vma,

<snip>

> +        /* Check to make sure the mapping hasn't been truncated */
> +        size = i_size_read(inode) >> HPAGE_SHIFT;
> +        if (pgoff >= size)
> +                return NULL;
> +
> + retry:
> +        page = find_get_page(mapping, pgoff);
> +        if (page)
> +                /* Another thread won the race */
> +                return page;

Both of those returns could be changed to goto out so that the function
has only one exit path. Isn't that what we want?
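
Something like this, say (just a rough sketch of what I mean against your
hugetlbfs_nopage() above, not tested):

        struct page *page = NULL;
        unsigned long size;

        /* Check to make sure the mapping hasn't been truncated */
        size = i_size_read(inode) >> HPAGE_SHIFT;
        if (pgoff >= size)
                goto out;

 retry:
        page = find_get_page(mapping, pgoff);
        if (page)
                /* Another thread won the race */
                goto out;
        ...
 out:
        return page;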

--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center

