Date: Thu, 1 Aug 2013
From: Jiri Olsa <jolsa@redhat.com>
Subject: Re: [PATCH 6/8] perf: Account freq events per cpu
On Thu, Aug 01, 2013 at 03:31:55PM +0200, Peter Zijlstra wrote:
> On Thu, Aug 01, 2013 at 02:46:58PM +0200, Jiri Olsa wrote:
> > On Tue, Jul 23, 2013 at 02:31:04AM +0200, Frederic Weisbecker wrote:
> > > This is going to be used by the full dynticks subsystem
> > > as finer-grained information to know when to keep and
> > > when to stop the tick.
> > >
> > > Original-patch-by: Peter Zijlstra <peterz@infradead.org>
> > > Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
> > > Cc: Jiri Olsa <jolsa@redhat.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Namhyung Kim <namhyung@kernel.org>
> > > Cc: Ingo Molnar <mingo@kernel.org>
> > > Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
> > > Cc: Stephane Eranian <eranian@google.com>
> > > ---
> > > kernel/events/core.c | 7 +++++++
> > > 1 files changed, 7 insertions(+), 0 deletions(-)
> > >
> > > diff --git a/kernel/events/core.c b/kernel/events/core.c
> > > index b40c3db..f9bd39b 100644
> > > --- a/kernel/events/core.c
> > > +++ b/kernel/events/core.c
> > > @@ -141,6 +141,7 @@ enum event_type_t {
> > >  struct static_key_deferred perf_sched_events __read_mostly;
> > >  static DEFINE_PER_CPU(atomic_t, perf_cgroup_events);
> > >  static DEFINE_PER_CPU(atomic_t, perf_branch_stack_events);
> > > +static DEFINE_PER_CPU(atomic_t, perf_freq_events);
> > >
> > >  static atomic_t nr_mmap_events __read_mostly;
> > >  static atomic_t nr_comm_events __read_mostly;
> > > @@ -3139,6 +3140,9 @@ static void unaccount_event_cpu(struct perf_event *event, int cpu)
> > >  	}
> > >  	if (is_cgroup_event(event))
> > >  		atomic_dec(&per_cpu(perf_cgroup_events, cpu));
> > > +
> > > +	if (event->attr.freq)
> > > +		atomic_dec(&per_cpu(perf_freq_events, cpu));
> > >  }
> > >
> > >  static void unaccount_event(struct perf_event *event)
> > > @@ -6473,6 +6477,9 @@ static void account_event_cpu(struct perf_event *event, int cpu)
> > >  	}
> > >  	if (is_cgroup_event(event))
> > >  		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
> > > +
> > > +	if (event->attr.freq)
> > > +		atomic_inc(&per_cpu(perf_freq_events, cpu));
> >
> > cpu could be -1 in here.. getting:
>
> Ho humm, right you are.
>
> So we have:
>
> static void account_event_cpu(struct perf_event *event, int cpu)
> {
> 	if (event->parent)
> 		return;
>
> 	if (has_branch_stack(event)) {
> 		if (!(event->attach_state & PERF_ATTACH_TASK))
> 			atomic_inc(&per_cpu(perf_branch_stack_events, cpu));
> 	}
> 	if (is_cgroup_event(event))
> 		atomic_inc(&per_cpu(perf_cgroup_events, cpu));
>
> 	if (event->attr.freq)
> 		atomic_inc(&per_cpu(perf_freq_events, cpu));
> }
>
> The freq thing is new and shiny, but we already had the other two. Of
> those, cgroup events must be per cpu, so that should be good; the
> branch_stack thing tests ATTACH_TASK, which should also be good, but it
> leaves me wondering what they do for events that are attached to tasks.

cgroup events are cpu only:

SYSCALL_DEFINE5(perf_event_open, ...)
	...
	if ((flags & PERF_FLAG_PID_CGROUP) && (pid == -1 || cpu == -1))
		return -EINVAL;

jirka
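
For what it's worth, one way out of the cpu == -1 problem would be to
account freq events globally rather than per cpu, since a per-task event
is not bound to any CPU. A rough sketch against the code quoted above;
the global nr_freq_events counter is this sketch's assumption, not
necessarily what was merged:

static atomic_t nr_freq_events __read_mostly;

static void account_event(struct perf_event *event)
{
	if (event->parent)
		return;

	/* Freq events can be per task (cpu == -1), so count them globally. */
	if (event->attr.freq)
		atomic_inc(&nr_freq_events);

	account_event_cpu(event, event->cpu);
}

static void unaccount_event(struct perf_event *event)
{
	if (event->parent)
		return;

	if (event->attr.freq)
		atomic_dec(&nr_freq_events);

	unaccount_event_cpu(event, event->cpu);
}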


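To make the cpu == -1 case concrete: freq events have no such
restriction, so a per-task freq event is opened with cpu == -1, which is
exactly what hits the per-cpu accounting above. A minimal userspace
sketch (ordinary perf_event_open(2) usage, not code from the thread):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <sys/types.h>
#include <linux/perf_event.h>

/* glibc has no wrapper for perf_event_open(2); call it directly. */
static int sys_perf_event_open(struct perf_event_attr *attr, pid_t pid,
			       int cpu, int group_fd, unsigned long flags)
{
	return syscall(__NR_perf_event_open, attr, pid, cpu, group_fd, flags);
}

int main(void)
{
	struct perf_event_attr attr;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.freq = 1;			/* sets event->attr.freq ... */
	attr.sample_freq = 4000;	/* ... with a target sample rate */

	/* pid == 0 (current task), cpu == -1 (any CPU): a per-task freq event. */
	fd = sys_perf_event_open(&attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}
	close(fd);
	return 0;
}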