Subject: Re: [PATCH 4/4] perf stat: Enable BPF counter with --for-each-cgroup
On Wed, Jun 30, 2021 at 1:09 PM Namhyung Kim <namhyung@kernel.org> wrote:
>
> Hi Song,
>
> On Wed, Jun 30, 2021 at 11:47 AM Song Liu <songliubraving@fb.com> wrote:
> >
> >
> >
> > > On Jun 25, 2021, at 12:18 AM, Namhyung Kim <namhyung@kernel.org> wrote:
> > >
> > > Recently bperf was added to use BPF to count perf events for various
> > > purposes. This is an extension of that approach, targeting cgroup
> > > usage.
> > >
> > > Unlike the other bperf mode, it doesn't share the events with other
> > > processes, but it reduces unnecessary events (and the overhead of
> > > multiplexing) for each monitored cgroup within the perf session.
> > >
> > > When --for-each-cgroup is used with --bpf-counters, it will open a
> > > cgroup-switches event per cpu internally and attach the new BPF
> > > program to read the given perf_events and aggregate the results per
> > > cgroup. The program only runs when a task switches to a task in a
> > > different cgroup.
> > >
> > > Cc: Song Liu <songliubraving@fb.com>
> > > Signed-off-by: Namhyung Kim <namhyung@kernel.org>
> > > ---
> > > tools/perf/Makefile.perf | 17 +-
> > > tools/perf/util/Build | 1 +
> > > tools/perf/util/bpf_counter.c | 5 +
> > > tools/perf/util/bpf_counter_cgroup.c | 299 ++++++++++++++++++++
> > > tools/perf/util/bpf_skel/bperf_cgroup.bpf.c | 191 +++++++++++++
> > > tools/perf/util/cgroup.c | 2 +
> > > tools/perf/util/cgroup.h | 1 +
> > > 7 files changed, 515 insertions(+), 1 deletion(-)
> > > create mode 100644 tools/perf/util/bpf_counter_cgroup.c
> > > create mode 100644 tools/perf/util/bpf_skel/bperf_cgroup.bpf.c
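
(Context for the description above: the mechanism boils down to roughly the
following. This is a simplified sketch, not the exact code in
bpf_counter_cgroup.c; error handling is minimal and the raw ioctl attach is
just one possible way to wire the program to the event.)

  #include <errno.h>
  #include <linux/perf_event.h>
  #include <sys/ioctl.h>
  #include <sys/syscall.h>
  #include <unistd.h>

  /* Open a cgroup-switches software event on one cpu and attach a
   * BPF program to it, so the program runs on every cgroup switch. */
  static int open_cgroup_switches(int cpu, int bpf_prog_fd)
  {
          struct perf_event_attr attr = {
                  .type = PERF_TYPE_SOFTWARE,
                  .config = PERF_COUNT_SW_CGROUP_SWITCHES,
                  .size = sizeof(attr),
          };
          int fd;

          /* system-wide on this cpu: pid = -1, no group leader */
          fd = syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
          if (fd < 0)
                  return -errno;

          /* run the BPF program whenever this event fires */
          if (ioctl(fd, PERF_EVENT_IOC_SET_BPF, bpf_prog_fd) < 0) {
                  close(fd);
                  return -errno;
          }

          return fd;
  }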
> >
> > [...]
> >
> > > diff --git a/tools/perf/util/bpf_counter_cgroup.c b/tools/perf/util/bpf_counter_cgroup.c
> > > new file mode 100644
> > > index 000000000000..327f97a23a84
> > > --- /dev/null
> > > +++ b/tools/perf/util/bpf_counter_cgroup.c
> > > @@ -0,0 +1,299 @@
> > > +// SPDX-License-Identifier: GPL-2.0
> > > +
> > > +/* Copyright (c) 2019 Facebook */
> >
> > I am not sure whether this ^^^ is accurate.
>
> Well, I just copied it from the bpf_counter.c file, which was the base
> of this patch. At this point I don't think many lines of code came
> directly from the original.
>
> So I'm not sure what I can do. Do you want me to update the
> copyright year to 2021? Or are you ok with removing the
> line altogether?
>

> > [...]
> >
> > > +
> > > +/*
> > > + * trigger the leader prog on each cpu, so the cgrp_reading map could get
> > > + * the latest results.
> > > + */
> > > +static int bperf_cgrp__sync_counters(struct evlist *evlist)
> > > +{
> > > + int i, cpu;
> > > + int nr_cpus = evlist->core.all_cpus->nr;
> > > + int prog_fd = bpf_program__fd(skel->progs.trigger_read);
> > > +
> > > + for (i = 0; i < nr_cpus; i++) {
> > > + cpu = evlist->core.all_cpus->map[i];
> > > + bperf_trigger_reading(prog_fd, cpu);
> > > + }
> > > +
> > > + return 0;
> > > +}
> > > +
> > > +static int bperf_cgrp__enable(struct evsel *evsel)
> > > +{
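
(bperf_trigger_reading() is not shown in this hunk; conceptually it just runs
the leader program once, pinned to the requested cpu, via BPF_PROG_TEST_RUN so
the results land in the maps. Roughly like this -- a sketch, the actual helper
may differ in detail:)

  #include <bpf/bpf.h>
  #include <bpf/libbpf.h>
  #include <linux/bpf.h>

  /* Run the given BPF program once on the requested cpu so it samples
   * the current counter values into its maps. */
  static int trigger_on_cpu(int prog_fd, int cpu)
  {
          DECLARE_LIBBPF_OPTS(bpf_test_run_opts, opts,
                              .flags = BPF_F_TEST_RUN_ON_CPU,
                              .cpu = cpu,
          );

          return bpf_prog_test_run_opts(prog_fd, &opts);
  }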
> >
> > Do we need to call bperf_cgrp__sync_counters() before setting enabled to 1?
> > If we don't, we may include some counts from before enabled was set to 1, no?
>
> Actually it'll update the prev_readings even when enabled = 0.
> So I think it should get the correct counts after setting it to 1,
> even without calling bperf_cgrp__sync_counters().
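
To make the scheme concrete, here is a toy userspace model of the accounting
(all names here are made up for illustration; the real logic lives in
bperf_cgroup.bpf.c): prev is refreshed on every cgroup switch, and deltas are
only accumulated while enabled is set.

  #include <stdio.h>

  static unsigned long long prev, total;
  static int enabled;

  /* called at every cgroup switch with the current counter value */
  static void on_cgroup_switch(unsigned long long now)
  {
          unsigned long long delta = now - prev;

          if (enabled)
                  total += delta;

          prev = now;     /* updated even when enabled == 0 */
  }

  int main(void)
  {
          on_cgroup_switch(100);  /* disabled: only refreshes prev */
          on_cgroup_switch(250);  /* disabled: still nothing counted */
          enabled = 1;
          on_cgroup_switch(300);  /* first delta spans 250..300, i.e. the
                                   * window that straddles the enable point */
          on_cgroup_switch(340);  /* 40 more */
          printf("total = %llu\n", total);        /* 90 */
          return 0;
  }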

I thought about this again, and you're right. Will change.

Thanks,
Namhyung
