Subject: Re: [PATCH 3/4] perf-events: Add support for supplementary event registers v3
On Thu, 2010-11-18 at 11:47 +0100, Andi Kleen wrote:
> From: Andi Kleen <ak@linux.intel.com>
>
> Intel Nehalem/Westmere have a special OFFCORE_RESPONSE event
> that can be used to monitor any offcore accesses from a core.
> This is a very useful event for various tunings, and it's
> also needed to implement the generic LLC-* events correctly.
>
> Unfortunately this event requires programming a mask in a separate
> register. And worse this separate register is per core, not per
> CPU thread.
>
> This patch:
> - Teaches perf_events that OFFCORE_RESPONSE needs extra parameters.
> The extra parameters are passed by user space in the unused upper
> 32 bits of the config word.
> - Adds support to the Intel perf_event core to schedule per-core
> resources. This adds fairly generic infrastructure that can also
> be used for other per-core resources.
> The basic code is patterned after the similar AMD northbridge
> constraints code.
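
So from user space this would look something like the below, if I'm
reading the encoding right (0x01b7 standing in for the raw
OFFCORE_RESPONSE_0 event select/umask on NHM, offcore_rsp_mask being
the bits destined for the extra MSR):

	struct perf_event_attr attr = {
		.type   = PERF_TYPE_RAW,
		.size   = sizeof(attr),
		/* event select/umask in the low 32 bits,
		 * OFFCORE_RSP mask in the unused high 32 bits */
		.config = 0x01b7ULL | ((u64)offcore_rsp_mask << 32),
	};
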
>
> Thanks to Stephane Eranian who pointed out some problems
> in the original version and suggested improvements.
>
> Full git tree:
> git://git.kernel.org/pub/scm/linux/kernel/git/ak/linux-misc-2.6.git perf-offcore2
> Cc: eranian@google.com
> v2: Lots of updates based on review feedback. Also fixes some issues
> v3: Fix hotplug. Handle multiple extra registers. Fix init order.
> Various improvements.

Stuff like that shouldn't be mixed in with the tags; it usually goes
below the --- separator since it's not supposed to end up in the
committed changelog.
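
IOW, something like:

	Signed-off-by: Andi Kleen <ak@linux.intel.com>
	---
	v2: Lots of updates based on review feedback. Also fixes some issues
	v3: Fix hotplug. Handle multiple extra registers. Fix init order.

	 arch/x86/kernel/cpu/perf_event.c | ...

so that git-am drops the version notes along with everything else
below the separator.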

> Signed-off-by: Andi Kleen <ak@linux.intel.com>
> ---
> arch/x86/kernel/cpu/perf_event.c       |   70 ++++++++++++
> arch/x86/kernel/cpu/perf_event_intel.c |  192 ++++++++++++++++++++++++++++++++
> include/linux/perf_event.h             |    2 +
> 3 files changed, 264 insertions(+), 0 deletions(-)

> @@ -876,6 +944,8 @@ static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
>  					  u64 enable_mask)
>  {
>  	wrmsrl(hwc->config_base + hwc->idx, hwc->config | enable_mask);
> +	if (hwc->extra_reg)
> +		wrmsrl(hwc->extra_reg, hwc->extra_config);
>  }

Just wondering, shouldn't we program the extra msr _before_ we flip the
enable bit?
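
I.e. something like this (untested, just to illustrate the ordering):

	static inline void __x86_pmu_enable_event(struct hw_perf_event *hwc,
						  u64 enable_mask)
	{
		/* program the mask first so the event never counts
		 * with a stale extra register */
		if (hwc->extra_reg)
			wrmsrl(hwc->extra_reg, hwc->extra_config);
		wrmsrl(hwc->config_base + hwc->idx, hwc->config | enable_mask);
	}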

> +static __init int init_intel_percore(void)
> +{
> +	int cpu;
> +
> +	if (!needs_percore)
> +		return 0;
> +
> +	intel_percore = alloc_percpu(struct intel_percore);
> +	if (!intel_percore)
> +		return -ENOMEM;
> +	for_each_possible_cpu(cpu)
> +		raw_spin_lock_init(&per_cpu_ptr(intel_percore, cpu)->lock);
> +
> +	return 0;
> +}
> +
> +/*
> + * Runs later because per cpu allocations don't work early on.
> + */
> +__initcall(init_intel_percore);

I've got a patch moving the whole pmu init to early_initcall(), which is
after mm_init() so it would actually work.
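
IOW, once that lands this could presumably be folded into the regular
pmu init path, or at worst become:

	early_initcall(init_intel_percore);

since the percpu allocator is usable after mm_init().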

