Subject: [RFC PATCH v2 11/12] x86/mm/tlb: Use async and inline messages for flushing

When we flush userspace mappings, we can defer the TLB flushes as long as
the following conditions are met:

1. No tables are freed, since otherwise speculative page walks might
cause machine-checks.

2. No one accesses userspace before the flush takes place. Specifically,
NMI handlers and kprobes must avoid accessing userspace.

Use the new SMP support to execute remote function calls with inlined data
for this purpose. The remote TLB flushing function is executed
asynchronously: the local CPU continues execution as soon as the IPI has
been delivered, before the function has actually run on the remote CPUs.
Since flush_tlb_info is copied into the IPI payload, there is no risk of it
changing before the TLB flush is actually executed.
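
As an illustration, here is a minimal sketch of the asynchronous call
pattern. It assumes the __smp_call_function_many() signature introduced
earlier in this series (taking the payload size and a wait flag); the
wrapper name is hypothetical and not part of this patch:

  /*
   * Sketch only: flush user mappings on remote CPUs without waiting for
   * them. flush_tlb_info is copied into the IPI payload (sizeof(*info)),
   * so the local CPU may continue as soon as the IPI has been delivered;
   * the final argument (wait == 0) makes the call asynchronous.
   */
  static void example_flush_user_async(const struct cpumask *cpumask,
				       struct flush_tlb_info *info)
  {
	__smp_call_function_many(cpumask, flush_tlb_func_remote,
				 flush_tlb_func_local, (void *)info,
				 sizeof(*info), 0);
  }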

Change nmi_uaccess_okay() to check whether a remote TLB flush is currently
in progress on this CPU, by checking whether the asynchronously called
function is the remote TLB flushing function. The current implementation
disallows access in this case; alternatively, the entire TLB could be
flushed at that point and access allowed.
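
For context, a rough sketch of how an NMI-context user read is expected to
consume this check (modelled loosely on copy_from_user_nmi(); the helper
name below is illustrative, not part of this patch):

  static unsigned long nmi_read_user(void *dst, const void __user *src,
				     unsigned long n)
  {
	unsigned long left;

	/*
	 * nmi_uaccess_okay() now also fails while a remote TLB flush is
	 * pending on this CPU, so the NMI path backs off rather than
	 * touching possibly-stale user mappings.
	 */
	if (!nmi_uaccess_okay())
		return n;		/* nothing copied */

	pagefault_disable();
	left = __copy_from_user_inatomic(dst, src, n);
	pagefault_enable();

	return n - left;		/* bytes copied */
  }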

When page-tables are freed or kernel PTEs are removed, perform synchronous
TLB flushes, but still inline the flush data with the IPI data, although
the performance gain in this case is likely to be much smaller.

Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Nadav Amit <namit@vmware.com>
---
arch/x86/include/asm/tlbflush.h | 12 ++++++++++++
arch/x86/mm/tlb.c | 15 +++++++++++----
2 files changed, 23 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
index a1fea36d5292..75e4c4263af6 100644
--- a/arch/x86/include/asm/tlbflush.h
+++ b/arch/x86/include/asm/tlbflush.h
@@ -245,6 +245,10 @@ struct tlb_state_shared {
 };
 DECLARE_PER_CPU_SHARED_ALIGNED(struct tlb_state_shared, cpu_tlbstate_shared);
 
+DECLARE_PER_CPU(smp_call_func_t, async_func_in_progress);
+
+extern void flush_tlb_func_remote(void *info);
+
 /*
  * Blindly accessing user memory from NMI context can be dangerous
  * if we're in the middle of switching the current user task or
@@ -259,6 +263,14 @@ static inline bool nmi_uaccess_okay(void)
 
 	VM_WARN_ON_ONCE(!loaded_mm);
 
+	/*
+	 * If we are in the middle of a TLB flush, access is not allowed. We
+	 * could have just flushed the entire TLB and allow access, but it is
+	 * easier and safer just to disallow access.
+	 */
+	if (this_cpu_read(async_func_in_progress) == flush_tlb_func_remote)
+		return false;
+
 	/*
 	 * The condition we want to check is
 	 * current_mm->pgd == __va(read_cr3_pa()). This may be slow, though,
diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 755b2bb3e5b6..fd7e90adbe43 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -644,7 +644,7 @@ static void flush_tlb_func_local(void *info)
 	flush_tlb_func_common(f, true, reason);
 }
 
-static void flush_tlb_func_remote(void *info)
+void flush_tlb_func_remote(void *info)
 {
 	const struct flush_tlb_info *f = info;
 
@@ -730,7 +730,7 @@ void native_flush_tlb_multi(const struct cpumask *cpumask,
 	 */
 	if (info->freed_tables)
 		__smp_call_function_many(cpumask, flush_tlb_func_remote,
-					 flush_tlb_func_local, (void *)info, 1);
+					 flush_tlb_func_local, (void *)info, sizeof(*info), 1);
 	else {
 		/*
 		 * Although we could have used on_each_cpu_cond_mask(),
@@ -753,8 +753,10 @@ void native_flush_tlb_multi(const struct cpumask *cpumask,
 			if (tlb_is_not_lazy(cpu))
 				__cpumask_set_cpu(cpu, cond_cpumask);
 		}
+
 		__smp_call_function_many(cond_cpumask, flush_tlb_func_remote,
-					 flush_tlb_func_local, (void *)info, 1);
+					 flush_tlb_func_local, (void *)info,
+					 sizeof(*info), 0);
 	}
 }
 
@@ -915,7 +917,12 @@ void flush_tlb_kernel_range(unsigned long start, unsigned long end)
 
 	info = get_flush_tlb_info(NULL, start, end, 0, false, 0);
 
-	on_each_cpu(do_kernel_range_flush, info, 1);
+	/*
+	 * We have to wait for the remote shootdown to be done since it
+	 * is kernel space.
+	 */
+	__on_each_cpu_mask(cpu_online_mask, do_kernel_range_flush,
+			   info, sizeof(*info), 1);
 
 	put_flush_tlb_info();
 }
--
2.20.1