From: Nikunj A Dadhania
Subject: Re: [PATCH v2 3/7] KVM: Add paravirt kvm_flush_tlb_others
Date: 5 Jul 2012

On Wed, 4 Jul 2012 23:09:10 -0300, Marcelo Tosatti <mtosatti@redhat.com> wrote:
> On Tue, Jul 03, 2012 at 01:49:49PM +0530, Nikunj A Dadhania wrote:
> > On Tue, 3 Jul 2012 04:55:35 -0300, Marcelo Tosatti <mtosatti@redhat.com> wrote:
> > > >
> > > > 	if (!zero_mask)
> > > > 		goto again;
> > >
> > > Can you please measure increased vmentry/vmexit overhead? x86/vmexit.c
> > > of git://git.kernel.org/pub/scm/virt/kvm/kvm-unit-tests.git should
> > > help.
> > >
> > Sure will get back with the result.
> >
> > > > +	/*
> > > > +	 * Guest might have seen us offline and would have set
> > > > +	 * flush_on_enter.
> > > > +	 */
> > > > +	kvm_read_guest_cached(vcpu->kvm, ghc, vs, 2*sizeof(__u32));
> > > > +	if (vs->flush_on_enter)
> > > > +		kvm_x86_ops->tlb_flush(vcpu);
> > >
> > >
> > > So flush_tlb_page, which was an invlpg, now flushes the entire TLB. Did
> > > you take that into account?
> > >
> > When the vcpu is sleeping/pre-empted out, multiple flush_tlb requests
> > could have accumulated. By the time we get here, we are cleaning up the
> > entire TLB in one go.
>
> Yes, cases where there are sufficient exits transforming one TLB entry
> invalidation into full TLB invalidation should go unnoticed.
>
> > One other approach would be to queue the addresses, but that raises the
> > question of how many requests to queue. It would also require additional
> > synchronization between guest and host for updating the shared area
> > where these addresses are kept.
>
> Sounds unnecessarily complicated.
>
Yes, I did give this a try earlier, but did not see enough improvement to
justify the complexity it brought in; a rough illustration of the general
idea is sketched below.
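
For reference, a minimal sketch of what such an address queue could look
like. This is only an illustration: the structure, field names, queue
length and helpers below are hypothetical, not code from this series, and
the guest/host synchronization is deliberately hand-waved (that is exactly
where the extra complexity came from).

/*
 * Kernel-context sketch (struct kvm_vcpu, __u32/__u64, smp_wmb() come
 * from <linux/kvm_host.h> and <linux/types.h>).  A small ring of
 * addresses lives in the per-vcpu area shared with the host; the host
 * would fetch it much like the kvm_read_guest_cached() in the hunk
 * quoted above.
 */
#define FLUSH_QUEUE_LEN		8
#define FLUSH_ALL_ADDR		(~0ULL)

struct flush_addr_queue {
	__u32 head;			/* written by the guest */
	__u32 tail;			/* written by the host  */
	__u64 addr[FLUSH_QUEUE_LEN];
};

/* Guest side: record one address, or fall back to a full-flush request. */
static void queue_flush_addr(struct flush_addr_queue *q, unsigned long addr)
{
	__u32 head = q->head;

	if (head - q->tail >= FLUSH_QUEUE_LEN) {
		/*
		 * Ring overflowed while the vcpu was preempted: a full
		 * flush subsumes everything that is already queued.
		 */
		q->addr[q->tail % FLUSH_QUEUE_LEN] = FLUSH_ALL_ADDR;
		return;
	}
	q->addr[head % FLUSH_QUEUE_LEN] = addr;
	smp_wmb();		/* publish the address before the new head */
	q->head = head + 1;
}

/* Host side, on vmentry: replay the queued invalidations. */
static void drain_flush_queue(struct kvm_vcpu *vcpu, struct flush_addr_queue *q)
{
	__u32 tail;

	for (tail = q->tail; tail != q->head; tail++) {
		__u64 addr = q->addr[tail % FLUSH_QUEUE_LEN];

		if (addr == FLUSH_ALL_ADDR) {
			kvm_x86_ops->tlb_flush(vcpu);
			break;
		}
		kvm_mmu_invlpg(vcpu, addr);	/* single-page invalidation */
	}
	q->tail = q->head;
}

Even in this toy form, the interesting part is the ordering between the
guest updating head and the host draining on vmentry, plus picking a sane
FLUSH_QUEUE_LEN; that is the extra synchronization mentioned above, and it
did not seem worth it for the gains observed.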

Regards
Nikunj