From: Nishanth Aravamudan <nacc@us.ibm.com>
Date: 2008-05-08
Subject: Re: [PATCH] x86: fix PAE pmd_bad bootup warning

On 08.05.2008 [18:51:11 +0200], Hans Rosenfeld wrote:
> On Thu, May 08, 2008 at 09:33:52AM -0700, Nishanth Aravamudan wrote:
> > So this seems to lend credence to Dave's hypothesis. Without, as you
> > were trying before, teaching pagemap all about hugepages, what are our
> > options?
> >
> > Can we just skip over the current iteration of the PMD loop (would we
> > need something similar for the PTE loop for power?) if pmd_huge(pmd)?
>
> Allowing huge pages in the page walker would affect both walk_pmd_range
> and walk_pud_range. Then either the users of the page walker need to
> know how to handle huge pages themselves (in the pmd_entry and pud_entry
> callback functions), or the page walker treats huge pages as any other
> pages (calling the pte_entry callback function).
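
For reference, my reading of the walker interface in question, i.e. the
2.6.25-era struct mm_walk from include/linux/mm.h:

struct mm_walk {
	int (*pgd_entry)(pgd_t *, unsigned long, unsigned long, void *);
	int (*pud_entry)(pud_t *, unsigned long, unsigned long, void *);
	int (*pmd_entry)(pmd_t *, unsigned long, unsigned long, void *);
	int (*pte_entry)(pte_t *, unsigned long, unsigned long, void *);
	int (*pte_hole)(unsigned long, unsigned long, void *);
};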

Right, I agree *if* we allow huge pages in the walker. But AIUI, things
are broken now with hugepages in the process' address space. This is a
bug upstream and leads to hugepages leaking out of the kernel when
/proc/pid/pagemap is read. Why not, instead (as a short-term fix), skip
hugepage mappings altogether in the page-walker code?
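
Something like the following in mm/pagewalk.c's walk_pmd_range(), say
(untested sketch; the placement of the pmd_huge() check, before
pmd_none_or_clear_bad() gets a chance to complain about or clear the
huge pmd, is my guess, and arguably the skipped range ought to be
reported via pte_hole or a new callback instead of silently dropped):

static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
			  const struct mm_walk *walk, void *private)
{
	pmd_t *pmd;
	unsigned long next;
	int err = 0;

	pmd = pmd_offset(pud, addr);
	do {
		next = pmd_addr_end(addr, end);
		/* Short-term fix: a huge pmd is not a page table page,
		 * so don't descend into it at all. */
		if (pmd_huge(*pmd))
			continue;
		if (pmd_none_or_clear_bad(pmd)) {
			if (walk->pte_hole)
				err = walk->pte_hole(addr, next, private);
			if (err)
				break;
			continue;
		}
		if (walk->pmd_entry)
			err = walk->pmd_entry(pmd, addr, next, private);
		if (!err && walk->pte_entry)
			err = walk_pte_range(pmd, addr, next, walk, private);
		if (err)
			break;
	} while (pmd++, addr = next, addr != end);

	return err;
}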

Hrm, upon further investigation, this seems to be a pretty clear
limitation of walk_page_range(), one that is avoided by the other two
callers:

static int show_smap(struct seq_file *m, void *v)
{
	...
	if (vma->vm_mm && !is_vm_hugetlb_page(vma))
		walk_page_range(vma->vm_mm, vma->vm_start, vma->vm_end,
				&smaps_walk, &mss);
	...
}

static ssize_t clear_refs_write(struct file *file, const char __user *buf,
				size_t count, loff_t *ppos)
{
	...
	for (vma = mm->mmap; vma; vma = vma->vm_next)
		if (!is_vm_hugetlb_page(vma))
			walk_page_range(mm, vma->vm_start, vma->vm_end,
					&clear_refs_walk, vma);
	...
}

No such protection exists for

static ssize_t pagemap_read(struct file *file, char __user *buf,
			    size_t count, loff_t *ppos);

So, can we either add an is_vm_hugetlb_page(vma) check to
pagemap_read(), or modify walk_page_range() to take a vma and skip the
walk entirely if is_vm_hugetlb_page(vma) is set [to avoid complications
down the road until hugepage walking is fixed]? I guess the latter
isn't possible for pagemap_read(), since we are just looking at
arbitrary addresses in the process' address space?

Dunno, seems quite clear that the bug is in pagemap_read(), not any
hugepage code, and that the simplest fix is to make pagemap_read() do
what the other walker-callers do, and skip hugepage regions.
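
Concretely, I'm picturing something along these lines in
fs/proc/task_mmu.c (rough and untested; pagemap_walk_range() is a
made-up helper, locking is elided, and a real patch would still have to
emit "not present" entries for the skipped ranges so the output stays
addressable):

static void pagemap_walk_range(struct mm_struct *mm, unsigned long start,
			       unsigned long end, struct pagemapread *pm)
{
	unsigned long addr = start;

	/* Walk vma by vma so hugetlb regions can be skipped, just as
	 * show_smap() and clear_refs_write() already do.  Presumably
	 * needs mm->mmap_sem held for read across the loop. */
	while (addr < end) {
		struct vm_area_struct *vma = find_vma(mm, addr);

		if (!vma || vma->vm_start >= end)
			break;
		if (!is_vm_hugetlb_page(vma))
			walk_page_range(mm, max(addr, vma->vm_start),
					min(end, vma->vm_end),
					&pagemap_walk, pm);
		addr = vma->vm_end;
	}
}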

Thanks,
Nish

--
Nishanth Aravamudan <nacc@us.ibm.com>
IBM Linux Technology Center

