Subject: [PATCH 5.0 229/238] KVM: nVMX: Sign extend displacements of VMX instrs mem operands
    5.0-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Sean Christopherson <sean.j.christopherson@intel.com>

    commit 946c522b603f281195af1df91837a1d4d1eb3bc9 upstream.

The VMCS.EXIT_QUALIFICATION field reports the displacements of memory
operands for various instructions, including VMX instructions, as a
naturally sized unsigned value, but masks the value by the addr size,
e.g. given a ModRM encoded as -0x28(%ebp), the -0x28 displacement is
reported as 0xffffffd8 for a 32-bit address size. Despite some weird
wording regarding sign extension, the SDM explicitly states that bits
beyond the instruction's address size are undefined:

    In all cases, bits of this field beyond the instruction’s address
    size are undefined.

    Failure to sign extend the displacement results in KVM incorrectly
    treating a negative displacement as a large positive displacement when
    the address size of the VMX instruction is smaller than KVM's native
    size, e.g. a 32-bit address size on a 64-bit KVM.
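
As a concrete illustration, here is a minimal user-space sketch (not
part of the patch) with a local stand-in for the kernel's
sign_extend64() helper, whose second argument names the bit to treat
as the sign bit:

	#include <stdio.h>
	#include <stdint.h>

	/* Local stand-in for the kernel's sign_extend64(). */
	static int64_t sign_extend64(uint64_t value, int index)
	{
		int shift = 63 - index;
		return (int64_t)(value << shift) >> shift;
	}

	int main(void)
	{
		uint64_t exit_qual = 0xffffffd8;	/* -0x28 masked to 32 bits */

		/* Taken as-is, the displacement looks like a huge positive offset. */
		printf("raw displacement      : 0x%llx\n",
		       (unsigned long long)exit_qual);

		/* Sign extended from bit 31, it is the intended -0x28. */
		printf("sign-extended (bit 31): %lld\n",
		       (long long)sign_extend64(exit_qual, 31));
		return 0;
	}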

    The very original decoding, added by commit 064aea774768 ("KVM: nVMX:
    Decoding memory operands of VMX instructions"), sort of modeled sign
    extension by truncating the final virtual/linear address for a 32-bit
    address size. I.e. it messed up the effective address but made it work
    by adjusting the final address.

    When segmentation checks were added, the truncation logic was kept
    as-is and no sign extension logic was introduced. In other words, it
    kept calculating the wrong effective address while mostly generating
    the correct virtual/linear address. As the effective address is what's
used in the segment limit checks, this results in KVM incorrectly
    injecting #GP/#SS faults due to non-existent segment violations when
    a nested VMM uses negative displacements with an address size smaller
    than KVM's native address size.

    Using the -0x28(%ebp) example, an EBP value of 0x1000 will result in
    KVM using 0x100000fd8 as the effective address when checking for a
    segment limit violation. This causes a 100% failure rate when running
    a 32-bit KVM build as L1 on top of a 64-bit KVM L0.
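
For reference, the same arithmetic worked through in a standalone
snippet (illustrative only, not kernel code):

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		uint64_t ebp = 0x1000;

		/* Displacement used as reported, without sign extension. */
		uint64_t wrong_ea = ebp + 0xffffffd8ULL;

		/* Displacement sign extended to the intended -0x28. */
		uint64_t right_ea = ebp + (uint64_t)(int64_t)-0x28;

		printf("without sign extension: 0x%llx\n",	/* 0x100000fd8 */
		       (unsigned long long)wrong_ea);
		printf("with sign extension   : 0x%llx\n",	/* 0xfd8 */
		       (unsigned long long)right_ea);
		return 0;
	}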

    Fixes: f9eb4af67c9d ("KVM: nVMX: VMX instructions: add checks for #GP/#SS exceptions")
    Cc: stable@vger.kernel.org
    Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com>
    Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    arch/x86/kvm/vmx/nested.c | 4 ++++
    1 file changed, 4 insertions(+)

--- a/arch/x86/kvm/vmx/nested.c
+++ b/arch/x86/kvm/vmx/nested.c
@@ -4035,6 +4035,10 @@ int get_vmx_mem_address(struct kvm_vcpu
 	/* Addr = segment_base + offset */
 	/* offset = base + [index * scale] + displacement */
 	off = exit_qualification; /* holds the displacement */
+	if (addr_size == 1)
+		off = (gva_t)sign_extend64(off, 31);
+	else if (addr_size == 0)
+		off = (gva_t)sign_extend64(off, 15);
 	if (base_is_valid)
 		off += kvm_register_read(vcpu, base_reg);
 	if (index_is_valid)
