Subject: Re: [PATCH] perf_events: improve Intel event scheduling
On Mon, Dec 21, 2009 at 8:31 PM, Peter Zijlstra <peterz@infradead.org> wrote:
> On Mon, 2009-12-21 at 20:00 +0100, Stephane Eranian wrote:
>
>> >  perf_disable() <-- shut down the full pmu
>> >
>> >  pmu->disable() <-- hey someone got removed (easy, free the reg)
>> >  pmu->enable()  <-- hey someone got added (harder, check constraints)
>> >
>> >  hw_perf_group_sched_in() <-- hey a full group got added
>> >                              (better than multiple ->enable)
>> >
>> >  perf_enable() <-- re-enable pmu
>> >
>> >
>> > So ->disable() is used to track freeing, ->enable is used to add
>> > individual counters, check constraints etc..
>> >
>> > hw_perf_group_sched_in() is used to optimize the full group enable.
>> >
>>
>> Does that mean that after a disable() I can assume that there won't
>> be an enable() without a group_sched_in()?
>>
>> I suspect not. In fact, there is a counter-example in perf_ctx_adjust_freq().
>
> Why would that be a problem?
>
> A perf_disable() will simply disable the pmu, but not alter its
> configuration; perf_enable() will program the pmu and re-enable it (with
> the obvious optimization that if the current state and the new state
> match, we can skip the actual programming).
>
> If a ->disable() was observed in between, it will simply not re-enable that
> particular counter, that is, it will remove that counter's config, and
> perf_enable() can do a smart reprogram by simply not re-enabling that
> counter.
>
> If an ->enable() or hw_perf_group_sched_in() was observed in between,
> you have to recompute the full state in order to validate the
> availability; if that fails, no state change; if it succeeds, you have a
> new pmu state and perf_enable() will program that.
>

OK, so what you are suggesting is that the assignment is actually done
incrementally in ->enable(). hw_perf_group_sched_in() would simply validate
that a single group is sane (i.e., that it could be scheduled if it were alone).
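
If I read that right, the scheme looks roughly like the toy below. This is
only a userspace sketch, not kernel code: struct hw_event, the sketch_*()
functions, NUM_COUNTERS and the constraint field are all made up for
illustration, and used_mask merely stands in for cpu_hw_events::used_mask.
->enable() places one counter incrementally, ->disable() just frees the
register, and the group call validates a whole group with rollback on failure.

#include <stdbool.h>
#include <stdint.h>

#define NUM_COUNTERS 4

struct hw_event {
	uint64_t constraint;	/* bitmask of counters this event may use */
	int	 idx;		/* assigned counter, -1 if none */
};

static uint64_t used_mask;	/* stand-in for cpu_hw_events::used_mask */

/* ->enable(): grab any free counter the event's constraint allows */
static bool sketch_enable(struct hw_event *ev)
{
	for (int i = 0; i < NUM_COUNTERS; i++) {
		if ((ev->constraint & (1ULL << i)) && !(used_mask & (1ULL << i))) {
			used_mask |= 1ULL << i;
			ev->idx = i;
			return true;
		}
	}
	return false;	/* no counter available; caller keeps the old state */
}

/* ->disable(): just free the register, no constraint work needed */
static void sketch_disable(struct hw_event *ev)
{
	if (ev->idx >= 0) {
		used_mask &= ~(1ULL << ev->idx);
		ev->idx = -1;
	}
}

/* group_sched_in(): all-or-nothing placement of a whole group */
static bool sketch_group_sched_in(struct hw_event **group, int n)
{
	int i;

	for (i = 0; i < n; i++)
		if (!sketch_enable(group[i]))
			break;

	if (i == n)
		return true;	/* whole group fits */

	while (--i >= 0)	/* roll back the partial assignment */
		sketch_disable(group[i]);
	return false;
}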

> In the case of perf_ctx_adjust_freq():
>
>        if (!interrupts) {
>                perf_disable();
>                event->pmu->disable(event);
>                atomic64_set(&hwc->period_left, 0);
>                event->pmu->enable(event);
>                perf_enable();
>        }
>
> You'll very likely end up with the same state you had before, but if not,
> who cares -- it's supposed to be an unlikely case.
>
> That is, the ->disable() will clear the cpu_hw_events::used_mask bit.
> The ->enable() will want to place the counter on its last known reg,
> which (not so very surprisingly) will be available, hence it will

Not in the case I am looking at on AMD. The counter you need to grab
depends on what is going on on the other cores on the socket, so
there is no guarantee that you will get the same counter back. Something
similar exists on Nehalem, but it has to do with HT.

> trivially succeed; depending on its smarts it might or might not find
> that the 'new' counter's config is identical to the current hw state, and
> if not it might set a cpu_hw_event::reprogram state.
>
> perf_enable() will, when it sees cpu_hw_event::reprogram, re-write the
> pmu config and re-enable the whole thing (global ctrl reg, or iterate
> individual EN bits).
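
For completeness, here is an equally hypothetical sketch of what that
perf_enable() path could look like if ->enable()/->disable() set such a
reprogram flag when the assignment actually changed. Again, the struct
sketch_cpuc type and the write_*() helpers are invented stand-ins for the
real per-cpu state and wrmsr calls, not the actual implementation.

#include <stdbool.h>
#include <stdint.h>

#define NUM_COUNTERS 4

struct sketch_cpuc {
	bool	 reprogram;			/* assignment changed while disabled */
	uint64_t config[NUM_COUNTERS];		/* desired per-counter config, 0 = unused */
};

/* stand-ins for the wrmsr calls the real code would do */
static void write_counter_config(int idx, uint64_t cfg) { (void)idx; (void)cfg; }
static void write_global_ctrl(uint64_t mask)            { (void)mask; }

static void sketch_perf_enable(struct sketch_cpuc *cpuc)
{
	uint64_t en = 0;

	if (cpuc->reprogram) {
		/* hw state is stale: rewrite every counter that is in use */
		for (int i = 0; i < NUM_COUNTERS; i++)
			if (cpuc->config[i])
				write_counter_config(i, cpuc->config[i]);
		cpuc->reprogram = false;
	}

	/* then re-enable: set the global ctrl bit for every active counter
	 * (or, on hardware without a global ctrl reg, iterate the EN bits) */
	for (int i = 0; i < NUM_COUNTERS; i++)
		if (cpuc->config[i])
			en |= 1ULL << i;
	write_global_ctrl(en);
}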
