Date: 2009-06-22
From: Ingo Molnar
Subject: Performance analysis under Linux (was: Re: [GIT PULL] Performance Counters for Linux)

* stephane eranian <eranian@googlemail.com> wrote:

[...]
> Those represent very advanced and very useful PMUs. Having
> implemented user and kernel support for both of them, I can attest
> that they challenge any interface. But perfmon is the proof that
> those can be exposed with their full strength through a generic
> kernel API. Therefore, I am relatively hopeful that there should be
> a way to expose them through your API.

The thing is: in my opinion the main challenge is not, and never
was, exposing as many PMU features as possible.

The main challenge is:

_also making it a useful solution for developers and users_

That is a key area where IMO perfcounters and perfmon differ. The
challenge of performance analysis lies in:

1) Making the tool usage patterns as natural as possible, and making
   the tools transparent and configuration-less by default.

2) Offering people roughly the same set of common and robust
   features regardless of what CPU (or even architecture) they are
   on. Adding a CPU-specific feature (without generalizing it at the
   same time) has _never_ worked well enough. (A short sketch of
   such generalized events follows this list.)

3) Visualizing the information in a rich way, making it work in as
   many development workflows as possible.
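
To make point 2 concrete, here is a rough sketch of what such
generalized events look like with tools/perf/ - './myapp' is only a
placeholder, and exact event names and options can vary between
versions of the tool and between CPUs:

  # count a handful of generalized hardware events; the same command
  # is meant to work regardless of the underlying CPU or architecture
  $ perf stat -e cycles,instructions,cache-misses ./myapp

  # list the symbolic, generalized events the tool knows about
  $ perf list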

tools/perf/ offers the seeds of that - it is a "full solution"
attempt at sane performance analysis tooling. It tries to be
'Oprofile done right' and 'pfmon done right'.

'perf' tries to keep the best aspects of oprofile (in essence, its
user-tooling workflow), based on a robust and tightly integrated
kernel side - and it tries to expand the range and type of analysis
that can be done.
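
As a rough sketch of that workflow (again, './myapp' is only a
placeholder, and option details may differ between versions of the
tool):

  # sample the workload - no daemon, no setup, no config files
  $ perf record ./myapp

  # browse the resulting profile, hottest functions first
  $ perf report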

I do claim we had few if any sane performance analysis tools under
Linux before, and I think we are still in the stone age and still
have a lot of work to do in this area.

As a side note, IMO Linux has become somewhat vulnerable to creeping
featuritis in the past few years, partly because we simply don't
have good enough tools that can easily _prove_ that a patch is
hurting performance.

I see many kernel developers using oprofile only as a last-ditch
option - and that's a pity, because running a profiler and
interpreting its results should be as easy as editing a file or
committing a change.

We've only scratched the surface really, and the main road ahead of
us is IMO not just about PMU hardware feature depth (which is
relatively straightforward), but about walking the full distance:
bringing it all to developers and putting it on their desks.

So for every "will you support advanced PMU feature X, Y and Z"
question you ask, the first-level answer is: 'please show the
developer use case and integrate it into our tools so we can see
how it all works and how useful it is'.

"A tool might want to do this" is not a good enough answer. We now
have a working OSS tool-space with 'perf' where such arguments for
more PMU features can be made in very specific terms: patches,
numbers and comparisons. Actual hands-on utility, happy developers
and faster apps is what matters in the end - not just the list of
PMU features we expose.
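
For instance, a PMU feature that has no generalized event yet can
still be exercised through the raw event path - a sketch only,
assuming the rNNN raw encoding accepted by current 'perf' versions,
with 0x004f as a made-up model-specific event code:

  # program a model-specific raw event and compare kernels with it
  $ perf stat -e r004f ./myapp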

Ingo

