Subject: [PATCH v7 04/24] mm: Don't assume page-table invariance during faults
From: Peter Zijlstra <peterz@infradead.org>

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

In most cases pte_unmap_same() was returning 1, meaning that
do_swap_page() should proceed with its processing. So in most cases
there will be no impact.
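
For reference, a rough sketch of the helper being compiled out; this
follows the shape of pte_unmap_same() in mm/memory.c at the time (the
diff below only shows its tail), so treat it as illustrative rather
than authoritative:

static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
				pte_t *page_table, pte_t orig_pte)
{
	int same = 1;
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
	/* Only needed where a pte cannot be read atomically. */
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		spinlock_t *ptl = pte_lockptr(mm, pmd);
		spin_lock(ptl);
		/* Did the pte change between the unlocked read and now? */
		same = pte_same(*page_table, orig_pte);
		spin_unlock(ptl);
	}
#endif
	pte_unmap(page_table);
	return same;
}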

Now regarding the case where pte_unmap_same() was returning 0, and thus
do_swap_page() returning 0 too: this happens when the page has already
been swapped back in. This may happen before do_swap_page() gets called,
or during the call to do_swap_page(). In the latter case, the check done
when swapin_readahead() returns will detect it.
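
That check is the pte_same() test made under the page-table lock after
swapin; simplified from the do_swap_page() of that era (an illustrative
sketch, not the exact hunk):

	/*
	 * Once the page is back from swap, retake the pte lock and
	 * verify the pte did not change while we slept in swapin.
	 */
	vmf->pte = pte_offset_map_lock(vma->vm_mm, vmf->pmd, vmf->address,
				       &vmf->ptl);
	if (unlikely(!pte_same(*vmf->pte, vmf->orig_pte)))
		goto out_nomap;	/* another thread already handled the fault */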

The worst case would be two threads faulting at the same time on the
same swapped-out page. In that case one thread may spend a long time
looping in __read_swap_cache_async(). But in the regular page fault
path this is even worse, since the thread would wait for the mmap_sem
to be released before starting anything.
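
The loop in question has roughly this shape (simplified from
__read_swap_cache_async(); illustrative only): the second thread keeps
failing swapcache_prepare() with -EEXIST until the first thread has
added the page to the swap cache.

	/*
	 * While another thread holds SWAP_HAS_CACHE on the entry but has
	 * not yet inserted the page into the swap cache,
	 * swapcache_prepare() returns -EEXIST, so back off and retry.
	 */
	err = swapcache_prepare(entry);
	if (err == -EEXIST) {
		cond_resched();
		continue;
	}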

[Remove pte_unmap_same() only when CONFIG_SPECULATIVE_PAGE_FAULT is set]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
mm/memory.c | 4 ++++
1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 5ec6433d6a5c..32b9eb77d95c 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2288,6 +2288,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
}
EXPORT_SYMBOL_GPL(apply_to_page_range);

+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
/*
* handle_pte_fault chooses page fault handler according to an entry which was
* read non-atomically. Before making any commitment, on those architectures
@@ -2311,6 +2312,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
pte_unmap(page_table);
return same;
}
+#endif /* CONFIG_SPECULATIVE_PAGE_FAULT */

static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
{
@@ -2898,11 +2900,13 @@ int do_swap_page(struct vm_fault *vmf)
swapcache = page;
}

+#ifndef CONFIG_SPECULATIVE_PAGE_FAULT
if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
if (page)
put_page(page);
goto out;
}
+#endif

entry = pte_to_swp_entry(vmf->orig_pte);
if (unlikely(non_swap_entry(entry))) {
--
2.7.4