From: Kirill A. Shutemov <kirill@shutemov.name>
Date: Mon, 28 Dec 2020
Subject: Re: [PATCH 1/2] mm: Allow architectures to request 'old' entries when prefaulting
On Mon, Dec 28, 2020 at 10:47:36AM -0800, Linus Torvalds wrote:
> On Mon, Dec 28, 2020 at 4:53 AM Kirill A. Shutemov <kirill@shutemov.name> wrote:
> >
> > So far I only found one more pin leak and always-true check. I don't see
> > how can it lead to crash or corruption. Keep looking.
>
> Well, I noticed that the nommu.c version of filemap_map_pages() needs
> fixing, but that's obviously not the case Hugh sees.
>
> No, I think the problem is the
>
> pte_unmap_unlock(vmf->pte, vmf->ptl);
>
> at the end of filemap_map_pages().
>
> Why?
>
> Because we've been updating vmf->pte as we go along:
>
> vmf->pte += xas.xa_index - last_pgoff;
>
> and I think that by the time we get to that "pte_unmap_unlock()",
> vmf->pte potentially points to past the edge of the page directory.

Well, if that's true we have a bigger problem: we would be setting up a
PTE entry without holding the relevant PTL.

But I *think* we should be fine here: do_fault_around() limits start_pgoff
and end_pgoff to stay within the page table.
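
The relevant clamping looks roughly like this (paraphrased from
do_fault_around(), not verbatim):

	/*
	 * end_pgoff is either the end of the page table, the end of
	 * the vma, or nr_pages from start_pgoff, whichever is nearest.
	 */
	end_pgoff = start_pgoff -
		((vmf->address >> PAGE_SHIFT) & (PTRS_PER_PTE - 1)) +
		PTRS_PER_PTE - 1;
	end_pgoff = min3(end_pgoff,
			 vma_pages(vmf->vma) + vmf->vma->vm_pgoff - 1,
			 start_pgoff + nr_pages - 1);

so the walk cannot step past a page-table boundary.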

It made me look at the code around pte_unmap_unlock(), and I think the
bug is that we have to reset vmf->address and NULL out vmf->pte once we
are done with the faultaround:

diff --git a/mm/memory.c b/mm/memory.c
index 829f5056dd1c..405f5c73ce3e 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3794,6 +3794,8 @@ vm_fault_t finish_fault(struct vm_fault *vmf)

 	update_mmu_tlb(vma, vmf->address, vmf->pte);
 	pte_unmap_unlock(vmf->pte, vmf->ptl);
+	vmf->address = address;
+	vmf->pte = NULL;
 	return ret;
 }
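
To spell out why (as I read the code): filemap_map_pages() advances
vmf->address and vmf->pte as it walks the range, so after faultaround
they describe the last prefaulted page, not the address that actually
faulted. The `address' local in the hunk above is assumed to be the
original faulting address saved earlier in finish_fault(); restoring it
and NULLing vmf->pte means any later code that treats a non-NULL
vmf->pte as "page table already mapped for vmf->address" starts from a
clean state instead of a stale pointer into an unmapped table.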

--
Kirill A. Shutemov