Subject: Re: [PATCH V2 2/4] perf stat: Always keep perf metrics topdown events in a group

On 5/16/2022 11:11 PM, Ian Rogers wrote:
> On Mon, May 16, 2022 at 8:25 AM <kan.liang@linux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@linux.intel.com>
>>
>> If any member of a group has a different cpu mask than the other
>> members, the current perf stat disables the group. When the perf
>> metrics topdown events are part of the group, the <not supported>
>> error below is triggered.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>> Performance counter stats for 'system wide':
>>
>> 141,465,174 slots
>> <not supported> topdown-retiring
>> 1,605,330,334 uncore_imc_free_running_0/dclk/
>>
>> The perf metrics topdown events must always be grouped with a slots
>> event as leader.
>>
>> Factor out evsel__remove_from_group() to remove only the regular
>> events from the group.
>>
>> Remove evsel__must_be_in_group(), since no one uses it anymore.
>>
>> With the patch, the topdown events are no longer removed from the
>> group when it is split.
>>
>> $ perf stat -e "{slots,topdown-retiring,uncore_imc_free_running_0/dclk/}" -a sleep 1
>> WARNING: grouped events cpus do not match, disabling group:
>> anon group { slots, topdown-retiring, uncore_imc_free_running_0/dclk/ }
>>
>> Performance counter stats for 'system wide':
>>
>> 346,110,588 slots
>> 124,608,256 topdown-retiring
>> 1,606,869,976 uncore_imc_free_running_0/dclk/
>>
>> 1.003877592 seconds time elapsed
>>
>> Fixes: a9a1790247bd ("perf stat: Ensure group is defined on top of the same cpu mask")
>> Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
>> ---
>> tools/perf/builtin-stat.c | 7 +++----
>> tools/perf/util/evlist.c | 6 +-----
>> tools/perf/util/evsel.c | 13 +++++++++++--
>> tools/perf/util/evsel.h | 2 +-
>> 4 files changed, 16 insertions(+), 12 deletions(-)
>>
>> diff --git a/tools/perf/builtin-stat.c b/tools/perf/builtin-stat.c
>> index a96f106dc93a..75c88c7939b1 100644
>> --- a/tools/perf/builtin-stat.c
>> +++ b/tools/perf/builtin-stat.c
>> @@ -271,10 +271,9 @@ static void evlist__check_cpu_maps(struct evlist *evlist)
>> pr_warning(" %s: %s\n", evsel->name, buf);
>> }
>>
>> - for_each_group_evsel(pos, leader) {
>> - evsel__set_leader(pos, pos);
>> - pos->core.nr_members = 0;
>> - }
>> + for_each_group_evsel(pos, leader)
>> + evsel__remove_from_group(pos, leader);
>> +
>> evsel->core.leader->nr_members = 0;
>
> This shouldn't be necessary now.

The leader should point to itself, which is already handled in
evsel__remove_from_group(), so the line is indeed unnecessary.

I will remove it in V3.
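
The loop in evlist__check_cpu_maps() would then reduce to the following
(a sketch of the intended V3 change, not the posted patch):

	for_each_group_evsel(pos, leader)
		evsel__remove_from_group(pos, leader);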

>
>> }
>> }
>> diff --git a/tools/perf/util/evlist.c b/tools/perf/util/evlist.c
>> index dfa65a383502..7fc544330fea 100644
>> --- a/tools/perf/util/evlist.c
>> +++ b/tools/perf/util/evlist.c
>> @@ -1795,11 +1795,7 @@ struct evsel *evlist__reset_weak_group(struct evlist *evsel_list, struct evsel *
>> * them. Some events, like Intel topdown, require being
>> * in a group and so keep these in the group.
>> */
>> - if (!evsel__must_be_in_group(c2) && c2 != leader) {
>> - evsel__set_leader(c2, c2);
>> - c2->core.nr_members = 0;
>> - leader->core.nr_members--;
>> - }
>> + evsel__remove_from_group(c2, leader);
>>
>> /*
>> * Set this for all former members of the group
>> diff --git a/tools/perf/util/evsel.c b/tools/perf/util/evsel.c
>> index b98882cbb286..deb428ee5e50 100644
>> --- a/tools/perf/util/evsel.c
>> +++ b/tools/perf/util/evsel.c
>> @@ -3083,7 +3083,16 @@ bool __weak arch_evsel__must_be_in_group(const struct evsel *evsel __maybe_unuse
>> return false;
>> }
>>
>> -bool evsel__must_be_in_group(const struct evsel *evsel)
>> +/*
>> + * Remove an event from a given group (leader).
>> + * Some events, e.g., perf metrics Topdown events,
>> + * must always be grouped; leave such events in place.
>> + */
>> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader)
>> {
>> - return arch_evsel__must_be_in_group(evsel);
>> + if (!arch_evsel__must_be_in_group(evsel) && evsel != leader) {
>> + evsel__set_leader(evsel, evsel);
>> + evsel->core.nr_members = 0;
>> + leader->core.nr_members--;
>> + }
>
> Should we also have:
>
> if (leader->core.nr_members == 1)
> leader->core.nr_members = 0;
>
> Otherwise, say:
>
> {instructions,cycles}
>
> with a remove of cycles becomes:
>
> {instructions}, cycles
>
> rather than the previous:
>
> instructions,cycles
>
> Actually, looking at:
> https://lore.kernel.org/lkml/20220512061308.1152233-2-irogers@google.com/
>
> + /* Reset the leader count if all entries were removed. */
> + if (leader->core.nr_members)
> + leader->core.nr_members = 0;
>
> is wrong and should be:
>
> + /* Reset the leader count if all entries were removed. */
> + if (leader->core.nr_members == 1)
> + leader->core.nr_members = 0;
>

For a perf metrics topdown group, the leader's nr_members must remain
greater than 1 after the reset, so we should not clear it.
For any other weak group, the leader's nr_members should equal 1 after
the reset, and that is the only case where we need to clear it.
I think the == 1 check makes sense.
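
For illustration, a minimal sketch of the reset with the == 1 check
(its placement here is an assumption; the final form depends on the
re-send):

	for_each_group_evsel(c2, leader)
		evsel__remove_from_group(c2, leader);

	/*
	 * If every regular member was removed, only the leader is
	 * left and the group can be dissolved. A perf metrics
	 * topdown group keeps nr_members > 1 and stays intact.
	 */
	if (leader->core.nr_members == 1)
		leader->core.nr_members = 0;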


Thanks,
Kan
> I'll fix and re-send.
>
> Thanks,
> Ian
>
>> }
>> diff --git a/tools/perf/util/evsel.h b/tools/perf/util/evsel.h
>> index a36172ed4cf6..47f65f8e7c74 100644
>> --- a/tools/perf/util/evsel.h
>> +++ b/tools/perf/util/evsel.h
>> @@ -483,7 +483,7 @@ bool evsel__has_leader(struct evsel *evsel, struct evsel *leader);
>> bool evsel__is_leader(struct evsel *evsel);
>> void evsel__set_leader(struct evsel *evsel, struct evsel *leader);
>> int evsel__source_count(const struct evsel *evsel);
>> -bool evsel__must_be_in_group(const struct evsel *evsel);
>> +void evsel__remove_from_group(struct evsel *evsel, struct evsel *leader);
>>
>> bool arch_evsel__must_be_in_group(const struct evsel *evsel);
>>
>> --
>> 2.35.1
>>
