Subject: Re: [RFC][Patch v8 0/7] KVM: Guest Free Page Hinting
On Mon, Feb 18, 2019 at 07:29:44PM +0100, David Hildenbrand wrote:
> >
> >>>
> >>> But really what business has something that is supposedly
> >>> an optimization blocking a VCPU? We are just freeing up
> >>> lots of memory why is it a good idea to slow that
> >>> process down?
> >>
> >> I first want to know that it is a problem before we declare it a
> >> problem. I provided an example (s390x) where it does not seem to be a
> >> problem. One hypercall ~every 512 frees. As simple as it can get.
> >>
> >> Not trying to deny that it could be a problem on x86, but then I
> >> assume it is only a problem in specific setups.
> >
> > But which setups? How are we going to identify them?
>
> I guess it is simple (I should be careful with this word ;) ): as long
> as you don't isolate + pin your CPUs in the hypervisor, you can expect
> any kind of sudden hiccups. We're in a virtualized world. Real time is
> one example.
>
> Using kernel threads like Nitesh does right now? They can be scheduled
> out at any time by the hypervisor on the exact same CPU, unless you
> isolate + pin in the hypervisor. So the same problem applies.

Right, but we know how to handle this. Many deployments already use
tools to detect host threads kicking VCPUs out.
Getting a VCPU blocked by a kfree call would be something new.
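
For illustration, here is a minimal sketch (assuming a purely made-up
hypercall and buffer; this is not Nitesh's patch or the s390x code) of
the synchronous batching discussed above: the guest collects freed page
frame numbers in a per-CPU buffer and issues one hint hypercall every
512 frees, from the freeing vCPU itself, which is exactly where the
blocking concern comes in.

/*
 * Hypothetical sketch of synchronous, batched free-page hinting.
 * All names (hint_buf, hypervisor_hint_pages, ...) are invented for
 * illustration; only the shape matters: one exit per HINT_BATCH_SIZE
 * frees, issued from the freeing vCPU.  Preemption/IRQ safety is
 * omitted for brevity.
 */
#define HINT_BATCH_SIZE 512

struct hint_buf {
	unsigned long pfns[HINT_BATCH_SIZE];
	unsigned int nr;
};

static DEFINE_PER_CPU(struct hint_buf, hint_buf);

/* Called from the page-freeing path with the freed page's PFN. */
static void hint_freed_page(unsigned long pfn)
{
	struct hint_buf *buf = this_cpu_ptr(&hint_buf);

	buf->pfns[buf->nr++] = pfn;

	if (buf->nr == HINT_BATCH_SIZE) {
		/*
		 * Synchronous model: the vCPU exits here and waits
		 * until the hypervisor has processed the whole batch.
		 * This is the blocking a kfree caller would see.
		 */
		hypervisor_hint_pages(buf->pfns, buf->nr);
		buf->nr = 0;
	}
}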


...

> > So I'm fine with a simple implementation but the interface needs to
> > allow the hypervisor to process hints in parallel while guest is
> > running. We can then fix any issues on hypervisor without breaking
> > guests.
>
> Yes, I am fine with defining an interface that theoretically lets us
> change the implementation in the guest later. I consider this even a
> prerequisite. IMHO the interface shouldn't be different, it will be
> exactly the same.
>
> It is just a question of "who" calls the batch freeing and waits for
> it. And as I outlined here, doing it without additional threads at
> least spares us, for now, from having to think about dynamic data
> structures and about sometimes not being able to report "because the
> thread is still busy reporting or wasn't scheduled yet".

Sorry, I wasn't clear. I think we need the ability to change the
implementation in the *host* later. IOW, don't rely on the
host being synchronous.
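
To make that concrete, below is a rough sketch of what a host-asynchronous
interface could look like: the guest posts hint batches into a shared ring
and keeps running, while the host consumes entries at its own pace. All
structure and function names here are invented for illustration and are
not the proposed ABI.

/*
 * Hypothetical asynchronous hinting ring shared between guest and host.
 * The guest fills a slot, kicks the host (e.g. a doorbell / virtqueue
 * notify), and continues running; it never waits for the host.  The
 * host flips a slot back to SLOT_FREE once the batch is handled.
 */
#define HINT_RING_SLOTS		16
#define HINT_BATCH_SIZE		512

enum hint_slot_state {
	SLOT_FREE,	/* owned by the guest, may be refilled    */
	SLOT_POSTED,	/* filled by the guest, owned by the host */
};

struct hint_slot {
	unsigned long pfns[HINT_BATCH_SIZE];
	unsigned int nr;
	unsigned int state;	/* enum hint_slot_state */
};

struct hint_ring {
	struct hint_slot slots[HINT_RING_SLOTS];
	unsigned int next;	/* next slot the guest will try to use */
};

/* Guest side: post a batch without waiting for the host to process it. */
static bool hint_post_batch(struct hint_ring *ring,
			    const unsigned long *pfns, unsigned int nr)
{
	struct hint_slot *slot = &ring->slots[ring->next];

	if (READ_ONCE(slot->state) != SLOT_FREE)
		return false;	/* ring full: skip or retry later, don't block */

	memcpy(slot->pfns, pfns, nr * sizeof(*pfns));
	slot->nr = nr;
	/* Publish the data before handing ownership to the host. */
	smp_store_release(&slot->state, SLOT_POSTED);

	notify_host();	/* hypothetical doorbell to the hypervisor */
	ring->next = (ring->next + 1) % HINT_RING_SLOTS;
	return true;
}

Whether a full ring means "skip the hint" or "retry later" is then a guest
policy decision, but either way a slow host only costs hints, not vCPU time.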


--
MST
