Subject: [PATCH v3 04/19] x86/intel: Initialize IA32_FEATURE_CONTROL MSR at boot
Opportunistically initialize IA32_FEATURE_CONTROL MSR to enable VMX when
the MSR is left unlocked by BIOS. Configuring IA32_FEATURE_CONTROL at
boot time paves the way for similar enabling of other features, e.g.
Software Guard Extensions (SGX).

Temporarily leave equivalent KVM code in place in order to avoid
introducing a regression on Centaur and Zhaoxin CPUs, e.g. removing
KVM's code would leave the MSR unlocked on those CPUs and would break
existing functionality if people are loading kvm_intel on Centaur and/or
Zhaoxin. Defer enablement of the boot-time configuration on Centaur and
Zhaoxin to future patches to aid bisection.

Note, Local Machine Check Exceptions (LMCE) are also supported by the
kernel and enabled via IA32_FEATURE_CONTROL, but the kernel currently
uses LMCE if and only if the feature is explicitly enabled by BIOS.
Keep the current behavior to avoid introducing bugs; future patches can
opt in to opportunistic enabling if that's deemed desirable.
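
For reference, the existing LMCE detection requires BIOS to have both
set the LMCE enable bit and locked the MSR before the kernel will use
LMCE. A rough sketch of the IA32_FEATURE_CONTROL portion of that check
(modeled on lmce_supported() in arch/x86/kernel/cpu/mce/intel.c, using
the FEAT_CTL_* names; the helper below is purely illustrative and not
part of this patch):

	static bool lmce_enabled_by_bios(void)
	{
		u64 msr;

		if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr))
			return false;

		/* Use LMCE only if BIOS enabled it and locked the MSR. */
		return (msr & (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED)) ==
		       (FEAT_CTL_LOCKED | FEAT_CTL_LMCE_ENABLED);
	}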

Always lock IA32_FEATURE_CONTROL if it exists, even if the CPU doesn't
support VMX, so that other existing and future kernel code that queries
IA32_FEATURE_CONTROL can assume it's locked.
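
As an illustration of what that guarantee buys, a consumer can read the
MSR once and treat any enable bit as final, e.g. (hypothetical helper,
not part of this patch):

	static bool feat_ctl_bit_enabled(u64 bit)
	{
		u64 msr;

		/* MSR not implemented on this CPU. */
		if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr))
			return false;

		/* Locked at boot, so the bit cannot change later. */
		return !!(msr & bit);
	}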

Start from a clean slate when constructing the value to write to
IA32_FEATURE_CONTROL, i.e. ignore whatever value BIOS left in the MSR so
as not to enable random features or fault on the WRMSR.

Suggested-by: Borislav Petkov <bp@suse.de>
Cc: Jarkko Sakkinen <jarkko.sakkinen@linux.intel.com>
Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
---
arch/x86/Kconfig.cpu | 4 +++
arch/x86/kernel/cpu/Makefile | 1 +
arch/x86/kernel/cpu/cpu.h | 4 +++
arch/x86/kernel/cpu/feature_control.c | 35 +++++++++++++++++++++++++++
arch/x86/kernel/cpu/intel.c | 2 ++
5 files changed, 46 insertions(+)
create mode 100644 arch/x86/kernel/cpu/feature_control.c

diff --git a/arch/x86/Kconfig.cpu b/arch/x86/Kconfig.cpu
index af9c967782f6..aafc14a0abf7 100644
--- a/arch/x86/Kconfig.cpu
+++ b/arch/x86/Kconfig.cpu
@@ -387,6 +387,10 @@ config X86_DEBUGCTLMSR
 	def_bool y
 	depends on !(MK6 || MWINCHIPC6 || MWINCHIP3D || MCYRIXIII || M586MMX || M586TSC || M586 || M486SX || M486) && !UML
 
+config X86_FEATURE_CONTROL_MSR
+	def_bool y
+	depends on CPU_SUP_INTEL
+
 menuconfig PROCESSOR_SELECT
 	bool "Supported processor vendors" if EXPERT
 	---help---
diff --git a/arch/x86/kernel/cpu/Makefile b/arch/x86/kernel/cpu/Makefile
index 890f60083eca..84e35e762f76 100644
--- a/arch/x86/kernel/cpu/Makefile
+++ b/arch/x86/kernel/cpu/Makefile
@@ -29,6 +29,7 @@ obj-y += umwait.o
 obj-$(CONFIG_PROC_FS) += proc.o
 obj-$(CONFIG_X86_FEATURE_NAMES) += capflags.o powerflags.o
 
+obj-$(CONFIG_X86_FEATURE_CONTROL_MSR) += feature_control.o
 ifdef CONFIG_CPU_SUP_INTEL
 obj-y += intel.o intel_pconfig.o tsx.o
 obj-$(CONFIG_PM) += intel_epb.o
diff --git a/arch/x86/kernel/cpu/cpu.h b/arch/x86/kernel/cpu/cpu.h
index 38ab6e115eac..a58e80866a3f 100644
--- a/arch/x86/kernel/cpu/cpu.h
+++ b/arch/x86/kernel/cpu/cpu.h
@@ -80,4 +80,8 @@ extern void x86_spec_ctrl_setup_ap(void);
 
 extern u64 x86_read_arch_cap_msr(void);
 
+#ifdef CONFIG_X86_FEATURE_CONTROL_MSR
+void init_feature_control_msr(struct cpuinfo_x86 *c);
+#endif
+
 #endif /* ARCH_X86_CPU_H */
diff --git a/arch/x86/kernel/cpu/feature_control.c b/arch/x86/kernel/cpu/feature_control.c
new file mode 100644
index 000000000000..33c9444dda52
--- /dev/null
+++ b/arch/x86/kernel/cpu/feature_control.c
@@ -0,0 +1,35 @@
+// SPDX-License-Identifier: GPL-2.0
+#include <linux/tboot.h>
+
+#include <asm/cpufeature.h>
+#include <asm/msr-index.h>
+#include <asm/processor.h>
+
+void init_feature_control_msr(struct cpuinfo_x86 *c)
+{
+	u64 msr;
+
+	if (rdmsrl_safe(MSR_IA32_FEATURE_CONTROL, &msr))
+		return;
+
+	if (msr & FEAT_CTL_LOCKED)
+		return;
+
+	/*
+	 * Ignore whatever value BIOS left in the MSR to avoid enabling random
+	 * features or faulting on the WRMSR.
+	 */
+	msr = FEAT_CTL_LOCKED;
+
+	/*
+	 * Enable VMX if and only if the kernel may do VMXON at some point,
+	 * i.e. KVM is enabled, to avoid unnecessarily adding an attack vector
+	 * for the kernel, e.g. using VMX to hide malicious code.
+	 */
+	if (cpu_has(c, X86_FEATURE_VMX) && IS_ENABLED(CONFIG_KVM)) {
+		msr |= FEAT_CTL_VMX_ENABLED_OUTSIDE_SMX;
+		if (tboot_enabled())
+			msr |= FEAT_CTL_VMX_ENABLED_INSIDE_SMX;
+	}
+	wrmsrl(MSR_IA32_FEATURE_CONTROL, msr);
+}
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 4a900804a023..b7c6ed0b40b6 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -755,6 +755,8 @@ static void init_intel(struct cpuinfo_x86 *c)
 	/* Work around errata */
 	srat_detect_node(c);
 
+	init_feature_control_msr(c);
+
 	if (cpu_has(c, X86_FEATURE_VMX))
 		detect_vmx_virtcap(c);
 
--
2.24.0