Subject: [PATCH V2] KVM/x86: Check input paging mode when cs.l is set
Reported by syzkaller:
WARNING: CPU: 0 PID: 27962 at arch/x86/kvm/emulate.c:5631 x86_emulate_insn+0x557/0x15f0 [kvm]
Modules linked in: kvm_intel kvm [last unloaded: kvm]
CPU: 0 PID: 27962 Comm: syz-executor Tainted: G B W 4.15.0-rc2-next-20171208+ #32
Hardware name: Intel Corporation S1200SP/S1200SP, BIOS S1200SP.86B.01.03.0006.040720161253 04/07/2016
RIP: 0010:x86_emulate_insn+0x557/0x15f0 [kvm]
RSP: 0018:ffff8807234476d0 EFLAGS: 00010282
RAX: 0000000000000000 RBX: ffff88072d0237a0 RCX: ffffffffa0065c4d
RDX: 1ffff100e5a046f9 RSI: 0000000000000003 RDI: ffff88072d0237c8
RBP: ffff880723447728 R08: ffff88072d020000 R09: ffffffffa008d240
R10: 0000000000000002 R11: ffffed00e7d87db3 R12: ffff88072d0237c8
R13: ffff88072d023870 R14: ffff88072d0238c2 R15: ffffffffa008d080
FS: 00007f8a68666700(0000) GS:ffff880802200000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000000002009506c CR3: 000000071fec4005 CR4: 00000000003626f0
Call Trace:
x86_emulate_instruction+0x3bc/0xb70 [kvm]
? reexecute_instruction.part.162+0x130/0x130 [kvm]
vmx_handle_exit+0x46d/0x14f0 [kvm_intel]
? trace_event_raw_event_kvm_entry+0xe7/0x150 [kvm]
? handle_vmfunc+0x2f0/0x2f0 [kvm_intel]
? wait_lapic_expire+0x25/0x270 [kvm]
vcpu_enter_guest+0x720/0x1ef0 [kvm]
...

When CS.L is set, the vcpu should be running in 64-bit mode, which
requires long-mode paging (CR0.PG=1, CR4.PAE=1 and EFER.LME=1). The
current kvm set_sregs path does not perform such a consistency check
on the values supplied by userspace, which leads to unexpected
behavior such as the emulator warning above. This patch adds checks
for CS.L, EFER.LME, EFER.LMA and CR4.PAE when SREG inputs are
received from userspace in order to avoid this.

Suggested-by: Paolo Bonzini <pbonzini@redhat.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Tianyu Lan <tianyu.lan@intel.com>
---
Change since v1:
       Add checks for EFER.LME, EFER.LMA and CR4.PAE to make sure
       that if EFER.LME=1 and CR0.PG=1, then EFER.LMA and CR4.PAE
       are also 1.
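
For illustration only (not part of the patch): a minimal userspace
sketch of the inconsistent state this check is meant to reject. It
assumes only the standard KVM ioctls (KVM_CREATE_VM, KVM_CREATE_VCPU,
KVM_GET_SREGS, KVM_SET_SREGS) and omits error handling for brevity;
with the patch applied, the final KVM_SET_SREGS call should fail with
-EINVAL instead of letting the vcpu reach the emulator in an
impossible mode.

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

int main(void)
{
	/* Error handling omitted; this is only a sketch. */
	int kvm = open("/dev/kvm", O_RDWR);
	int vm = ioctl(kvm, KVM_CREATE_VM, 0);
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	struct kvm_sregs sregs;

	ioctl(vcpu, KVM_GET_SREGS, &sregs);

	/* Inconsistent: 64-bit code segment without long-mode paging. */
	sregs.cs.l = 1;
	sregs.cr0 &= ~(1ULL << 31);	/* CR0.PG = 0 */
	sregs.efer = 0;			/* EFER.LME = EFER.LMA = 0 */

	if (ioctl(vcpu, KVM_SET_SREGS, &sregs) < 0)
		perror("KVM_SET_SREGS");	/* expected: Invalid argument */
	else
		printf("KVM_SET_SREGS unexpectedly succeeded\n");

	return 0;
}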

---
arch/x86/kvm/x86.c | 27 +++++++++++++++++++++++++++
1 file changed, 27 insertions(+)

diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1e1e617..8aca950 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7485,6 +7485,30 @@ int kvm_task_switch(struct kvm_vcpu *vcpu, u16 tss_selector, int idt_index,
 }
 EXPORT_SYMBOL_GPL(kvm_task_switch);
 
+int kvm_valid_sregs(struct kvm_vcpu *vcpu, struct kvm_sregs *sregs)
+{
+	/*
+	 * When EFER.LME and CR0.PG are set, CR4.PAE and EFER.LMA
+	 * must be set.
+	 */
+	if ((sregs->efer & EFER_LME) && (sregs->cr0 & X86_CR0_PG)) {
+		if (!(sregs->cr4 & X86_CR4_PAE))
+			return -EINVAL;
+		if (!(sregs->efer & EFER_LMA))
+			return -EINVAL;
+	} else if (sregs->efer & EFER_LMA)
+		return -EINVAL;
+
+	/* When CS.L is set, the vcpu should run in 64-bit mode. */
+	if (sregs->cs.l)
+		if (!((sregs->cr0 & X86_CR0_PG) &&
+		      (sregs->cr4 & X86_CR4_PAE) &&
+		      (sregs->efer & EFER_LME)))
+			return -EINVAL;
+
+	return 0;
+}
+
 int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 				  struct kvm_sregs *sregs)
 {
@@ -7497,6 +7521,9 @@ int kvm_arch_vcpu_ioctl_set_sregs(struct kvm_vcpu *vcpu,
 			(sregs->cr4 & X86_CR4_OSXSAVE))
 		return -EINVAL;
 
+	if (kvm_valid_sregs(vcpu, sregs))
+		return -EINVAL;
+
 	apic_base_msr.data = sregs->apic_base;
 	apic_base_msr.host_initiated = true;
 	if (kvm_set_apic_base(vcpu, &apic_base_msr))
--
1.8.3.1