Subject: Re: [PATCH v2 3/4] arch/x86: Optionally flush L1D on context switch
Balbir,

Balbir Singh <sblbir@amazon.com> writes:
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 6f66d841262d..69e6ea20679c 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -172,7 +172,7 @@ struct tlb_state {
>  	/* Last user mm for optimizing IBPB */
>  	union {
>  		struct mm_struct *last_user_mm;
> -		unsigned long last_user_mm_ibpb;
> +		unsigned long last_user_mm_spec;

> -static inline unsigned long mm_mangle_tif_spec_ib(struct task_struct *next)
> +static inline unsigned long mm_mangle_tif_spec_bits(struct task_struct *next)

> -static void cond_ibpb(struct task_struct *next)
> +static void cond_mitigation(struct task_struct *next)
>  {
> +	unsigned long prev_mm, next_mm;
> +
>  	if (!next || !next->mm)
>  		return;

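(For context: the union member quoted above works because the
speculation-control TIF bits of the incoming task are OR'ed into the
otherwise unused low bits of its mm pointer, so a single word records
both the last user mm and the mitigations that task needs. A rough
standalone sketch of that packing idea follows; the constant and helper
names are illustrative only, the real ones live in arch/x86/mm/tlb.c.)

/*
 * Illustrative sketch, not kernel code: mm_struct pointers are at
 * least word aligned, so their low bits are free to carry per-task
 * mitigation flags.  The names below are made up for the example.
 */
#include <stdio.h>

struct mm_struct { int dummy; };	/* stand-in for the real type */

#define LAST_USER_MM_IBPB	0x1UL	/* task wants IBPB on switch */
#define LAST_USER_MM_L1D_FLUSH	0x2UL	/* task wants L1D flush on switch */
#define LAST_USER_MM_SPEC_MASK	(LAST_USER_MM_IBPB | LAST_USER_MM_L1D_FLUSH)

/* Pack the mitigation bits into the unused low bits of the mm pointer */
static unsigned long mangle_mm_spec(struct mm_struct *mm, unsigned long bits)
{
	return (unsigned long)mm | (bits & LAST_USER_MM_SPEC_MASK);
}

int main(void)
{
	struct mm_struct mm;
	unsigned long packed = mangle_mm_spec(&mm, LAST_USER_MM_IBPB);

	/* Both pieces of information can be recovered from the one word */
	printf("mm=%p spec=%#lx\n",
	       (void *)(packed & ~LAST_USER_MM_SPEC_MASK),
	       packed & LAST_USER_MM_SPEC_MASK);
	return 0;
}
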
Can you please split out these preparatory changes into a separate
patch?

Thanks,

tglx
