Subject: Re: [PATCH] vhost: support upto 509 memory regions

On Tue, Feb 17, 2015 at 4:32 AM, Michael S. Tsirkin <mst@redhat.com> wrote:
> On Tue, Feb 17, 2015 at 11:59:48AM +0100, Paolo Bonzini wrote:
>>
>>
>> On 17/02/2015 10:02, Michael S. Tsirkin wrote:
>> > > Increasing VHOST_MEMORY_MAX_NREGIONS from 65 to 509
>> > > to match KVM_USER_MEM_SLOTS fixes the issue for vhost-net.
>> > >
>> > > Signed-off-by: Igor Mammedov <imammedo@redhat.com>
>> >
>> > This scares me a bit: each region is 32 bytes, so we are talking
>> > about a 16K allocation that userspace can trigger.
>>
>> What's wrong with a 16K allocation?
>
> It fails when memory is fragmented.
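
For reference, the arithmetic (a quick, untested userspace sketch; the
constants just mirror the 32-byte region layout in the vhost uapi and
the limit the patch proposes):

#include <stdio.h>

int main(void)
{
        unsigned long region_size = 32;  /* sizeof(struct vhost_memory_region) */
        unsigned long nregions = 509;    /* proposed VHOST_MEMORY_MAX_NREGIONS */

        /* ~16 KiB, i.e. roughly an order-2 (4 contiguous pages) kernel
         * allocation, which is exactly the kind fragmentation breaks. */
        printf("%lu bytes\n", nregions * region_size);
        return 0;
}
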
>
>> > How does kvm handle this issue?
>>
>> It doesn't.
>>
>> Paolo
>
> I'm guessing kvm doesn't do memory scans on the data path,
> vhost does.
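
For anyone following along, the lookup in question looks roughly like
this (simplified and untested; the struct and function names here are
illustrative, not the exact kernel code):

#include <stddef.h>
#include <stdint.h>

/* 32-byte descriptor, mirroring struct vhost_memory_region. */
struct region {
        uint64_t guest_phys_addr;
        uint64_t memory_size;
        uint64_t userspace_addr;
        uint64_t flags_padding;
};

/* Walk the table until one region covers [gpa, gpa + len).  With 509
 * entries this linear walk sits on the data path for every descriptor
 * that gets translated. */
static const struct region *find_region(const struct region *regions,
                                        unsigned int nregions,
                                        uint64_t gpa, uint32_t len)
{
        unsigned int i;

        for (i = 0; i < nregions; i++) {
                const struct region *r = &regions[i];

                if (gpa >= r->guest_phys_addr &&
                    gpa - r->guest_phys_addr + len <= r->memory_size)
                        return r;
        }
        return NULL;
}
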
>
> qemu is just doing things that the kernel didn't expect it to need.
>
> Instead, I suggest reducing the number of GPA<->HVA mappings:
>
> say you have GPA 1, 5, 7;
> map them at HVA 11, 15, 17,
> and then you can cover them all with 1 slot: 1->11.
>
> To avoid libc reusing the memory holes, reserve them with MAP_NORESERVE
> or something like this.
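
Something like this, I assume (untested sketch, the helper names are
made up; real guest RAM would of course come from whatever backing
QEMU uses rather than MAP_ANONYMOUS):

#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

/* Reserve one large HVA window covering the whole guest physical
 * address space.  PROT_NONE + MAP_NORESERVE costs no memory and keeps
 * libc from handing out the holes between RAM blocks. */
static void *reserve_gpa_window(size_t guest_phys_size)
{
        return mmap(NULL, guest_phys_size, PROT_NONE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

/* Map each RAM block inside the window at an offset equal to its GPA,
 * so HVA = window + GPA everywhere and a single slot describes it all. */
static void *map_ram_block(void *window, uint64_t gpa, size_t size)
{
        return mmap((char *)window + gpa, size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANONYMOUS | MAP_FIXED, -1, 0);
}
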

This works beautifully when host virtual address bits are more
plentiful than guest physical address bits. Not all architectures
have that property, though.

> We can discuss smarter lookup algorithms but I'd rather
> userspace didn't do things that we then have to
> work around in the kernel.
>
>
> --
> MST

