Subject: Re: [PATCH 5/8] x86: Make old K8 swapgs workaround conditional
On Fri, Apr 10, 2015 at 08:50:30AM -0700, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
>
> Every gs selector/index reload always paid an extra MFENCE
> between the two SWAPGS instructions. This was to work around an old
> bug in early K8 steppings. All other CPUs don't need the extra
> MFENCE. Patch the extra MFENCE in only for K8.
>
> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
> arch/x86/include/asm/cpufeature.h | 1 +
> arch/x86/kernel/cpu/amd.c | 3 +++
> arch/x86/kernel/entry_64.S | 10 +++++++++-
> 3 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
> index 90a5485..c695fad 100644
> --- a/arch/x86/include/asm/cpufeature.h
> +++ b/arch/x86/include/asm/cpufeature.h
> @@ -255,6 +255,7 @@
> #define X86_BUG_11AP X86_BUG(5) /* Bad local APIC aka 11AP */
> #define X86_BUG_FXSAVE_LEAK X86_BUG(6) /* FXSAVE leaks FOP/FIP/FOP */
> #define X86_BUG_CLFLUSH_MONITOR X86_BUG(7) /* AAI65, CLFLUSH required before MONITOR */
> +#define X86_BUG_SWAPGS_MFENCE X86_BUG(8) /* SWAPGS may need MFENCE */
>
> #if defined(__KERNEL__) && !defined(__ASSEMBLY__)
>
> diff --git a/arch/x86/kernel/cpu/amd.c b/arch/x86/kernel/cpu/amd.c
> index a220239..e7f5667 100644
> --- a/arch/x86/kernel/cpu/amd.c
> +++ b/arch/x86/kernel/cpu/amd.c
> @@ -551,6 +551,9 @@ static void init_amd_k8(struct cpuinfo_x86 *c)
> if ((level >= 0x0f48 && level < 0x0f50) || level >= 0x0f58)
> set_cpu_cap(c, X86_FEATURE_REP_GOOD);
>
> + /* Early steppings needed a mfence on swapgs. */
> + set_cpu_cap(c, X86_BUG_SWAPGS_MFENCE);

set_cpu_bug()

and this should not be set on all K8 CPUs but only on the early
steppings which actually need it.
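
Roughly something like the sketch below, i.e. keyed off the same
cpuid level value already used in init_amd_k8() for REP_GOOD. The
0x0f48 cutoff here is purely illustrative and would have to be
replaced with the stepping range from the actual erratum
documentation:

	/*
	 * Sketch only: set the bug flag just for the affected early
	 * K8 steppings instead of for every K8. The cutoff value is
	 * an assumption, not the real erratum boundary.
	 */
	if (level < 0x0f48)
		set_cpu_bug(c, X86_BUG_SWAPGS_MFENCE);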

> +
> /*
> * Some BIOSes incorrectly force this feature, but only K8 revision D
> * (model = 0x14) and later actually support it.
> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
> index 0b74ab0..bb44292 100644
> --- a/arch/x86/kernel/entry_64.S
> +++ b/arch/x86/kernel/entry_64.S
> @@ -1212,13 +1212,21 @@ ENTRY(native_load_gs_index)
> SWAPGS
> gs_change:
> movl %edi,%gs
> -2: mfence /* workaround */
> +2: ASM_NOP3 /* may be replaced with mfence */
> SWAPGS
> popfq_cfi
> ret
> CFI_ENDPROC
> END(native_load_gs_index)
>
> + /* Early K8 systems needed an mfence after swapgs to workaround a bug */
> + .section .altinstr_replacement,"ax"
> +3: mfence
> + .previous
> + .section .altinstructions,"a"
> + altinstruction_entry 2b,3b,X86_BUG_SWAPGS_MFENCE,3,3
> + .previous
> +

What AndyL said.

--
Regards/Gruss,
Boris.

ECO tip #101: Trim your mails when you reply.
--

