Subject: Re: [RFC 06/16] KVM: Use GUP instead of copy_from/to_user() to access guest memory
On Fri, May 22, 2020 at 03:52:04PM +0300, Kirill A. Shutemov wrote:
> +int copy_from_guest(void *data, unsigned long hva, int len)
> +{
> +	int offset = offset_in_page(hva);
> +	struct page *page;
> +	int npages, seg;
> +
> +	while ((seg = next_segment(len, offset)) != 0) {
> +		npages = get_user_pages_unlocked(hva, 1, &page, 0);
> +		if (npages != 1)
> +			return -EFAULT;
> +		memcpy(data, page_address(page) + offset, seg);
> +		put_page(page);
> +		len -= seg;
> +		hva += seg;
> +		offset = 0;
> +	}
> +
> +	return 0;
> +}
> +
> +int copy_to_guest(unsigned long hva, const void *data, int len)
> +{
> +	int offset = offset_in_page(hva);
> +	struct page *page;
> +	int npages, seg;
> +
> +	while ((seg = next_segment(len, offset)) != 0) {
> +		npages = get_user_pages_unlocked(hva, 1, &page, FOLL_WRITE);
> +		if (npages != 1)
> +			return -EFAULT;
> +		memcpy(page_address(page) + offset, data, seg);
> +		put_page(page);
> +		len -= seg;
> +		hva += seg;
> +		offset = 0;
> +	}
> +	return 0;
> +}
> +
> static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> -				 void *data, int offset, int len)
> +				 void *data, int offset, int len,
> +				 bool protected)
> {
> 	int r;
> 	unsigned long addr;
> @@ -2257,7 +2297,10 @@ static int __kvm_read_guest_page(struct kvm_memory_slot *slot, gfn_t gfn,
> 	addr = gfn_to_hva_memslot_prot(slot, gfn, NULL);
> 	if (kvm_is_error_hva(addr))
> 		return -EFAULT;
> -	r = __copy_from_user(data, (void __user *)addr + offset, len);
> +	if (protected)
> +		r = copy_from_guest(data, addr + offset, len);
> +	else
> +		r = __copy_from_user(data, (void __user *)addr + offset, len);
> 	if (r)
> 		return -EFAULT;
> 	return 0;

This ends up removing KASAN and object size tests. Compare to:

__copy_from_user(void *to, const void __user *from, unsigned long n)
{
	might_fault();
	kasan_check_write(to, n);
	check_object_size(to, n, false);
	return raw_copy_from_user(to, from, n);
}

Those will need to get added back. :)
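
Something along these lines would keep that instrumentation in the new
helper; an untested sketch, reusing the names from the patch above (and
also advancing the destination pointer per segment):

int copy_from_guest(void *data, unsigned long hva, int len)
{
	int offset = offset_in_page(hva);
	struct page *page;
	int npages, seg;

	/* Keep the checks that __copy_from_user() was giving us. */
	might_fault();
	kasan_check_write(data, len);
	check_object_size(data, len, false);

	while ((seg = next_segment(len, offset)) != 0) {
		npages = get_user_pages_unlocked(hva, 1, &page, 0);
		if (npages != 1)
			return -EFAULT;
		memcpy(data, page_address(page) + offset, seg);
		put_page(page);
		/* Advance destination and source before the next segment. */
		data += seg;
		hva += seg;
		len -= seg;
		offset = 0;
	}

	return 0;
}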

Additionally, I see that copy_from_guest() neither clears the
destination memory on a short read, nor does KVM actually handle the
short read case correctly now. See the notes in uaccess.h:

* NOTE: only copy_from_user() zero-pads the destination in case of short copy.
* Neither __copy_from_user() nor __copy_from_user_inatomic() zero anything
* at all; their callers absolutely must check the return value.
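
For the GUP-based helper that would mean zero-filling whatever part of the
destination did not get copied when the lookup fails. An untested sketch of
the failure path (assuming "data" is advanced along with "hva", as in the
sketch above):

		npages = get_user_pages_unlocked(hva, 1, &page, 0);
		if (npages != 1) {
			/* Mirror copy_from_user(): zero the uncopied tail. */
			memset(data, 0, len);
			return -EFAULT;
		}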

It's not clear to me how the destination buffers get reused, but this has
the potential to leak kernel memory contents. This needs separate
fixing, I think.

-Kees

--
Kees Cook
