Date: 2009-01-20
From: Ingo Molnar <mingo@elte.hu>
Subject: Re: lmbench lat_mmap slowdown with CONFIG_PARAVIRT

* Nick Piggin <npiggin@suse.de> wrote:

> On Tue, Jan 20, 2009 at 03:03:24PM +0100, Ingo Molnar wrote:
> >
> > * Ingo Molnar <mingo@elte.hu> wrote:
> >
> > > > Times I believe are in nanoseconds for lmbench, anyway lower is
> > > > better.
> > > >
> > > > non pv AVG=464.22 STD=5.56
> > > > paravirt AVG=502.87 STD=7.36
> > > >
> > > > Nearly 10% performance drop here, which is quite a bit... hopefully
> > > > people are testing the speed of their PV implementations against
> > > > non-PV bare metal :)
> > >
> > > Ouch, that looks unacceptably expensive. All the major distros turn
> > > CONFIG_PARAVIRT on. paravirt_ops was introduced in x86 with the express
> > > promise to have no measurable runtime overhead.
> >
> > Here are some more precise stats done via hw counters on a perfcounters
> > kernel using 'timec', running a modified version of the 'mmap performance
> > stress-test' app i made years ago.
> >
> > The MM benchmark app can be downloaded from:
> >
> > http://redhat.com/~mingo/misc/mmap-perf.c
>
> BTW, I run the lmbench test directly (it's called lat_mmap.c, and it gets
> compiled into a standalone lat_mmap executable by the standard lmbench build).

Doesn't that include an indeterminate number of gettimeofday()-based
calibration calls? That would make it harder to measure its total cost in a
comparative way.
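
For comparison, a fixed-iteration loop along the lines of the sketch below
keeps the measured work deterministic (this is purely illustrative, not
lat_mmap.c and not mmap-perf.c; the iteration count, mapping size and the
anonymous mapping are arbitrary choices made for the sketch):

/* Illustrative sketch only: a fixed-iteration mmap+munmap timing loop.
 * The iteration count, mapping size and anonymous mapping are arbitrary
 * choices; this is neither lat_mmap.c nor mmap-perf.c.
 */
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/time.h>

#define ITERATIONS 100000
#define MAP_SIZE   (64 * 1024)

int main(void)
{
        struct timeval start, end;
        long i;

        gettimeofday(&start, NULL);
        for (i = 0; i < ITERATIONS; i++) {
                char *p = mmap(NULL, MAP_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
                if (p == MAP_FAILED) {
                        perror("mmap");
                        exit(1);
                }
                p[0] = 1;       /* touch the mapping so it is really set up */
                munmap(p, MAP_SIZE);
        }
        gettimeofday(&end, NULL);

        printf("%.1f ns per mmap+munmap\n",
               ((end.tv_sec - start.tv_sec) * 1e9 +
                (end.tv_usec - start.tv_usec) * 1e3) / ITERATIONS);
        return 0;
}

With only two gettimeofday() calls bracketing a known amount of work, the
totals reported by hw counters should be directly comparable between the PV
and non-PV kernels.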

Ingo

