Subject: [PATCH v4 19/20] x86/mm: Add speculative pagefault handling
From: Peter Zijlstra <peterz@infradead.org>

Try a speculative fault before acquiring mmap_sem; if it returns with
VM_FAULT_RETRY, continue with the mmap_sem acquisition and do the
traditional fault.
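
In outline, the flow added to __do_page_fault() is roughly the
following (a simplified sketch of the hunk below, not the patch
itself; the fallback is simply the pre-existing fault-handling code):

	fault = handle_speculative_fault(mm, address, flags);
	if (!(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR)))
		goto done;	/* handled without taking mmap_sem */

	/*
	 * Otherwise fall through to the traditional path, which takes
	 * mmap_sem, looks up the vma and calls handle_mm_fault().
	 */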

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>

[Clearing of FAULT_FLAG_ALLOW_RETRY is now done in
 handle_speculative_fault()]
[Retry with the usual fault path in the case VM_FAULT_ERROR is returned
 by handle_speculative_fault(). This allows signals to be delivered]
[Don't build the SPF call if !__HAVE_ARCH_CALL_SPF]
Signed-off-by: Laurent Dufour <ldufour@linux.vnet.ibm.com>
---
arch/x86/include/asm/pgtable_types.h | 7 +++++++
arch/x86/mm/fault.c | 21 +++++++++++++++++++++
2 files changed, 28 insertions(+)
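
To spell out the decision encoded in the new check (an illustrative
helper only; spf_handled() is made up here, while the VM_FAULT_* flags
are the ones the patch actually tests):

	/* Illustrative: did the speculative attempt fully handle the fault? */
	static inline bool spf_handled(int fault)
	{
		/*
		 * VM_FAULT_RETRY: the speculative path bailed out, so
		 * retry under mmap_sem.  VM_FAULT_ERROR: a signal must be
		 * raised via mm_fault_error(), which needs the vma pointer
		 * that only the classic path looks up.
		 */
		return !(fault & (VM_FAULT_RETRY | VM_FAULT_ERROR));
	}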

diff --git a/arch/x86/include/asm/pgtable_types.h b/arch/x86/include/asm/pgtable_types.h
index f1492473f10e..48c7e587eac4 100644
--- a/arch/x86/include/asm/pgtable_types.h
+++ b/arch/x86/include/asm/pgtable_types.h
@@ -257,6 +257,13 @@ enum page_cache_mode {
 #define PGD_IDENT_ATTR	 0x001		/* PRESENT (no other attributes) */
 #endif
 
+/*
+ * Advertise that we call the Speculative Page Fault handler.
+ */
+#if defined(CONFIG_X86_64) && defined(CONFIG_SMP)
+#define __HAVE_ARCH_CALL_SPF
+#endif
+
 #ifdef CONFIG_X86_32
 # include <asm/pgtable_32_types.h>
 #else
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index e2baeaa053a5..802dc7d914a2 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1364,6 +1364,24 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 	if (error_code & PF_INSTR)
 		flags |= FAULT_FLAG_INSTRUCTION;
 
+#ifdef __HAVE_ARCH_CALL_SPF
+	if (error_code & PF_USER) {
+		fault = handle_speculative_fault(mm, address, flags);
+
+		/*
+		 * We also check against VM_FAULT_ERROR because we have to
+		 * raise a signal by calling mm_fault_error() later, which
+		 * requires the vma pointer to be set. So in that case we
+		 * fall through to the normal path.
+		 */
+		if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR)) {
+			perf_sw_event(PERF_COUNT_SW_SPF, 1,
+				      regs, address);
+			goto done;
+		}
+	}
+#endif /* __HAVE_ARCH_CALL_SPF */
+
 	/*
 	 * When running in the kernel we expect faults to occur only to
 	 * addresses in user space.  All other faults represent errors in
@@ -1474,6 +1492,9 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
 		return;
 	}
 
+#ifdef __HAVE_ARCH_CALL_SPF
+done:
+#endif
 	/*
 	 * Major/minor page fault accounting. If any of the events
 	 * returned VM_FAULT_MAJOR, we account it as a major fault.
--
2.7.4