From: Dmitry Vyukov <dvyukov@google.com>
Date: Fri, 6 Jan 2017
Subject: Re: kvm: use-after-free in complete_emulated_mmio
On Fri, Jan 6, 2017 at 10:59 AM, Wanpeng Li <kernellwp@gmail.com> wrote:
> 2016-12-27 21:57 GMT+08:00 Dmitry Vyukov <dvyukov@google.com>:
>> Hello,
>>
>> The following program triggers use-after-free in complete_emulated_mmio:
>> https://gist.githubusercontent.com/dvyukov/79c7ee10f568b0d5c33788534bb6edc9/raw/2c2d4ce0fe86398ed81e65281e8c215c7c3632fb/gistfile1.txt
>>
>> BUG: KASAN: use-after-free in complete_emulated_mmio+0x8dd/0xb70 arch/x86/kvm/x86.c:7052 at addr ffff880069f1ed48
>> Read of size 8 by task syz-executor/31542
>> CPU: 3 PID: 31542 Comm: syz-executor Not tainted 4.9.0+ #105
>> Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
>> Call Trace:
>> check_memory_region+0x139/0x190 mm/kasan/kasan.c:322
>> memcpy+0x23/0x50 mm/kasan/kasan.c:357
>> complete_emulated_mmio+0x8dd/0xb70 arch/x86/kvm/x86.c:7052
>> kvm_arch_vcpu_ioctl_run+0x308d/0x45f0 arch/x86/kvm/x86.c:7090
>> kvm_vcpu_ioctl+0x673/0x1120 arch/x86/kvm/../../../virt/kvm/kvm_main.c:2569
>> vfs_ioctl fs/ioctl.c:43 [inline]
>> do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:683
>> SYSC_ioctl fs/ioctl.c:698 [inline]
>> SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
>> entry_SYSCALL_64_fastpath+0x1f/0xc2
>> RIP: 0033:0x4421e9
>> RSP: 002b:00007f320dc67b58 EFLAGS: 00000286 ORIG_RAX: 0000000000000010
>> RAX: ffffffffffffffda RBX: 0000000000000018 RCX: 00000000004421e9
>> RDX: 0000000000000000 RSI: 000000000000ae80 RDI: 0000000000000018
>> RBP: 00000000006dbb20 R08: 0000000000000000 R09: 0000000000000000
>> R10: 0000000000000000 R11: 0000000000000286 R12: 0000000000700000
>> R13: 00007f320de671c8 R14: 00007f320de69000 R15: 0000000000000000
>> Object at ffff880069f183c0, in cache kmalloc-16384 size: 16384
>> Allocated:
>> PID = 31567
>> [<ffffffff8123eb36>] save_stack_trace+0x16/0x20 arch/x86/kernel/stacktrace.c:57
>> [<ffffffff81943353>] save_stack+0x43/0xd0 mm/kasan/kasan.c:502
>> [<ffffffff8194361a>] set_track mm/kasan/kasan.c:514 [inline]
>> [<ffffffff8194361a>] kasan_kmalloc+0xaa/0xd0 mm/kasan/kasan.c:605
>> [<ffffffff8193fa7c>] kmem_cache_alloc_trace+0xec/0x640 mm/slab.c:3629
>> [<ffffffff810724ce>] kvm_arch_alloc_vm include/linux/slab.h:490 [inline]
>> [<ffffffff810724ce>] kvm_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:613 [inline]
>> [<ffffffff810724ce>] kvm_dev_ioctl_create_vm arch/x86/kvm/../../../virt/kvm/kvm_main.c:3174 [inline]
>> [<ffffffff810724ce>] kvm_dev_ioctl+0x1be/0x11b0 arch/x86/kvm/../../../virt/kvm/kvm_main.c:3218
>> [<ffffffff819af76f>] vfs_ioctl fs/ioctl.c:43 [inline]
>> [<ffffffff819af76f>] do_vfs_ioctl+0x1bf/0x1780 fs/ioctl.c:683
>> [<ffffffff819b0dbf>] SYSC_ioctl fs/ioctl.c:698 [inline]
>> [<ffffffff819b0dbf>] SyS_ioctl+0x8f/0xc0 fs/ioctl.c:689
>> [<ffffffff83fd1d81>] entry_SYSCALL_64_fastpath+0x1f/0xc2
>> Memory state around the buggy address:
>> ffff880069f1ec00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> ffff880069f1ec80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>>>ffff880069f1ed00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> ^
>> ffff880069f1ed80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> ffff880069f1ee00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>> ==================================================================
>>
>>
>> On commit e93b1cc8a8965da137ffea0b88e5f62fa1d2a9e6 (Dec 19).
>>
>>
>> I've also printed some values when the bug happens:
>>
>> pr_err("vcpu=%p, mmio_fragments=%p frag=%p frag=%d/%d len=%d gpa=%p write=%d\n",
>>        vcpu, vcpu->mmio_fragments, frag, vcpu->mmio_cur_fragment,
>>        vcpu->mmio_nr_fragments, frag->len, (void *)frag->gpa,
>>        vcpu->mmio_is_write);
>>
>> [ 26.765898] vcpu=ffff880068590100, mmio_fragments=ffff880068590338 frag=ffff880068590338 frag=0/1 len=152 gpa=0000000000001008 write=1
>
>
> test-2892 [006] .... 118.284172: complete_emulated_mmio: vcpu = ffff9beefb288000, mmio_fragments = ffff9beefb2881b0, frag = ffff9beefb2881b0, frag = 0/1, len = 160, gpa = 0000000000001000, write = 1
> test-2897 [003] .... 118.284196: complete_emulated_mmio: vcpu = ffff9beef69a0000, mmio_fragments = ffff9beef69a01b0, frag = ffff9beef69a01b0, frag = 0/1, len = 160, gpa = 0000000000001000, write = 1
>
> Actually the MMIO access will be split into 8-byte pieces and
> returned to qemu (if it is not emulated by KVM) to be emulated one by
> one. However, we can observe that there is no subsequent handling of
> the remaining pieces. I guess the VM is destroyed almost immediately
> in the testcase, right?
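
[For reference, the piece-by-piece completion described above is
implemented by complete_emulated_mmio() in arch/x86/kvm/x86.c. Below
is a simplified sketch, paraphrased from a 4.9-era tree (details
trimmed, not a verbatim copy), of how one <=8-byte piece is completed
per KVM_RUN exit:

	static int complete_emulated_mmio(struct kvm_vcpu *vcpu)
	{
		struct kvm_run *run = vcpu->run;
		struct kvm_mmio_fragment *frag;
		unsigned len;

		/* Complete the piece userspace just emulated. */
		frag = &vcpu->mmio_fragments[vcpu->mmio_cur_fragment];
		len = min(8u, frag->len);
		if (!vcpu->mmio_is_write)
			memcpy(frag->data, run->mmio.data, len);

		if (frag->len <= 8) {
			/* This fragment is done; switch to the next one. */
			frag++;
			vcpu->mmio_cur_fragment++;
		} else {
			/* Step forward inside a fragment larger than 8 bytes. */
			frag->data += len;
			frag->gpa += len;
			frag->len -= len;
		}

		if (vcpu->mmio_cur_fragment >= vcpu->mmio_nr_fragments) {
			vcpu->mmio_needed = 0;
			if (vcpu->mmio_is_write)
				return 1;	/* resume the guest */
			vcpu->mmio_read_completed = 1;
			return complete_emulated_io(vcpu);
		}

		/* Hand the next piece to userspace and ask to be re-run. */
		run->exit_reason = KVM_EXIT_MMIO;
		run->mmio.phys_addr = frag->gpa;
		if (vcpu->mmio_is_write)
			memcpy(run->mmio.data, frag->data, min(8u, frag->len));
		run->mmio.len = min(8u, frag->len);
		run->mmio.is_write = vcpu->mmio_is_write;
		vcpu->arch.complete_userspace_io = complete_emulated_mmio;
		return 0;	/* exit to userspace with KVM_EXIT_MMIO */
	}

Each "return 0" bounces back to userspace, so an access split into N
pieces needs N KVM_RUN re-entries to complete.]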


I am not sure I fully understand your question.
First, there is only one KVM_RUN in the test, so if multiple pieces
require multiple KVM_RUNs, then we will not see that happening.
Also, I set panic_on_warn, so the kernel panics after the first KASAN report.
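
[To make the "one KVM_RUN" point concrete: the KVM API delivers each
piece to userspace as a KVM_EXIT_MMIO exit and expects another KVM_RUN
ioctl to complete it. A minimal, hypothetical userspace loop would
look like the sketch below; emulate_mmio() is a placeholder for device
emulation, not a real API:

	#include <sys/ioctl.h>
	#include <linux/kvm.h>

	/* Hypothetical device-emulation hook (placeholder). */
	extern void emulate_mmio(__u64 gpa, void *data, __u32 len, __u8 is_write);

	/* vcpu_fd is the vCPU fd; run is its mmap()ed struct kvm_run. */
	static void run_vcpu(int vcpu_fd, struct kvm_run *run)
	{
		for (;;) {
			if (ioctl(vcpu_fd, KVM_RUN, 0) < 0)
				return;
			switch (run->exit_reason) {
			case KVM_EXIT_MMIO:
				emulate_mmio(run->mmio.phys_addr, run->mmio.data,
					     run->mmio.len, run->mmio.is_write);
				break;	/* re-enter so KVM hands us the next piece */
			default:
				return;
			}
		}
	}

A single KVM_RUN with no such loop, as in the syzkaller reproducer,
leaves the remaining pieces pending when the VM fds are closed.]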
