Subject: Re: [RFC PATCH 0/4] Gang scheduling in CFS
On Wed, 04 Jan 2012 19:13:15 +0200, Avi Kivity <avi@redhat.com> wrote:
> On 01/04/2012 04:56 PM, Srivatsa Vaddagiri wrote:
> > * Avi Kivity <avi@redhat.com> [2012-01-04 16:41:58]:
> >
> > > > Here are some observation related to Baseline-only(8vm case)
> > > >
> > > >               | ple_gap=128 | ple_gap=64 | ple_gap=256 | ple_window=2048
> > > > --------------+-------------+------------+-------------+----------------
> > > > EbzyRecords/s |     2247.50 |    2132.75 |     2086.25 |         1835.62
> > > > PauseExits    |  7928154.00 | 6696342.00 |  7365999.00 |     50319582.00
> > > >
> > > > With ple_window = 2048, PauseExits is more than 6 times that of the default case (ple_gap=128)
> > >
> > > So it looks like the default is optimal, at least wrt the cases you
> > > tested and your test workload.
> >
> > The default case still lags considerably behind the results we are seeing with
> > gang scheduling. One more interesting data point would be to see how
> > many PLE exits we are seeing when the vcpu is spinning in
> > flush_tlb_others_ipi(). Is there any easy way to determine that?
> >
>
> You could get an exit trace (trace-cmd -e kvm:kvm_exit) and filter on
> PLE exits; the trace contains the guest %rip, so you could match it
> against flush_tlb_others_ipi().
>
Cool, this is much easier. I had to write a small awk script to extract
the PLE exits that fall in flush_tlb_others_ipi:

Matched 9382616(86%), Not matched 1453845(14%)
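
The matching was roughly along the following lines (a sketch, not the
exact script; the PAUSE_INSTRUCTION reason string and the "rip 0x..."
field are what the Intel kvm:kvm_exit tracepoint prints, and the
start/end addresses below are placeholders to be filled in from the
guest /proc/kallsyms -- start of flush_tlb_others_ipi and start of the
next symbol):

trace-cmd report | gawk '
BEGIN {
        # placeholder addresses, low 32 bits only; kernel text shares
        # the same upper 32 bits, and this keeps the values within
        # gawk exact-integer range
        start = strtonum("0x81234560")  # flush_tlb_others_ipi
        end   = strtonum("0x81234720")  # next symbol in kallsyms
}
/kvm_exit/ && /PAUSE_INSTRUCTION/ {
        total++
        rip = 0
        for (i = 1; i < NF; i++)
                if ($i == "rip") {
                        rip = strtonum("0x" substr($(i + 1), length($(i + 1)) - 7))
                        break
                }
        if (rip >= start && rip < end)
                matched++
}
END {
        if (total)
                printf("Matched %d(%d%%), Not matched %d(%d%%)\n",
                       matched, 100 * matched / total,
                       total - matched, 100 * (total - matched) / total)
}'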

So a considerable fraction of the PLE exits come from
flush_tlb_others_ipi, and even then we see:

35.01% ebizzy [kernel.kallsyms] [k] flush_tlb_others_ipi
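
For reference, that breakdown is the usual perf report format; inside
the guest, something along these lines would produce it (options
illustrative only):

        perf record -a -- sleep 30      # system-wide, while ebizzy runs
        perf report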

Nikunj


