Subject: [PATCH 3.16 279/306] perf/x86: Fix full width counter, counter overflow
3.16.40-rc1 review patch.  If anyone has any objections, please let me know.

------------------

From: "Peter Zijlstra (Intel)" <peterz@infradead.org>

commit 7f612a7f0bc13a2361a152862435b7941156b6af upstream.

Lukasz reported that perf stat counter overflow handling is broken on KNL/SLM.

Both these parts have full_width_write set, and that does indeed have
a problem. In order to deal with counter wrap, we must sample the
counter at least once per half counter period (see also the sampling
theorem) so that we can unambiguously reconstruct the count.

However commit:

  069e0c3c4058 ("perf/x86/intel: Support full width counting")

sets the sampling interval to the full period, not half.
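
As an aside (not part of the upstream changelog): take a hypothetical
4-bit counter whose raw reads wrap modulo 16. Sampled only once per
full period, a wrap is indistinguishable from no movement; sampled at
least once per half period, the counter moves at most 8 between reads
and the wrapped difference is always the true count. A minimal
user-space C sketch:

	#include <stdio.h>

	/* Hypothetical 4-bit counter: raw reads wrap modulo 16. */
	static unsigned int wrapped_delta(unsigned int prev, unsigned int now)
	{
		return (now - prev) & 0xF;
	}

	int main(void)
	{
		/* Once per full period: 0x3 followed by 0x3 could mean
		 * 0 events or a full wrap (16 events); both print 0. */
		printf("full period: delta = %u\n", wrapped_delta(0x3, 0x3));

		/* At least twice per period, movement is bounded by 8,
		 * so the wrapped difference is unambiguous (prints 8). */
		printf("half period: delta = %u\n", wrapped_delta(0x3, 0xB));
		return 0;
	}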

Fixing that exposes another issue: we must not sign-extend the delta
value when we shift it right, since the counter cannot have
decremented.
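
As another aside, here is a user-space C sketch of the reconstruction
done in x86_perf_event_update() (both samples are shifted to the top
of the 64-bit word so the subtraction wraps modulo the counter width,
then the difference is shifted back down); the 48-bit width and the
sample values are illustrative assumptions:

	#include <stdio.h>
	#include <stdint.h>
	#include <inttypes.h>

	int main(void)
	{
		int shift = 64 - 48;		/* assume 48-bit counters */
		uint64_t prev = 0;		/* previous raw count */
		uint64_t now = 1ULL << 47;	/* large but valid forward movement */

		/* gcc/x86 semantics assumed, as in the kernel: two's
		 * complement wrap-around and arithmetic right shift of
		 * negative signed values. */
		int64_t s_delta = (int64_t)((now << shift) - (prev << shift)) >> shift;
		uint64_t u_delta = ((now << shift) - (prev << shift)) >> shift;

		/* The signed shift sign-extends bit 47 and reports a
		 * negative delta, yet the counter can only have counted
		 * up; the unsigned shift zero-fills and is correct. */
		printf("s64 delta: %" PRId64 "\n", s_delta);	/* -140737488355328 */
		printf("u64 delta: %" PRIu64 "\n", u_delta);	/*  140737488355328 */
		return 0;
	}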

With both these issues fixed, counter overflow functions correctly
again.

Reported-by: Lukasz Odzioba <lukasz.odzioba@intel.com>
Tested-by: Liang, Kan <kan.liang@intel.com>
Tested-by: Odzioba, Lukasz <lukasz.odzioba@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Fixes: 069e0c3c4058 ("perf/x86/intel: Support full width counting")
Signed-off-by: Ingo Molnar <mingo@kernel.org>
[bwh: Backported to 3.16: adjust filenames]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
---
 arch/x86/kernel/cpu/perf_event.c       | 2 +-
 arch/x86/kernel/cpu/perf_event_intel.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -64,7 +64,7 @@ u64 x86_perf_event_update(struct perf_ev
 	int shift = 64 - x86_pmu.cntval_bits;
 	u64 prev_raw_count, new_raw_count;
 	int idx = hwc->idx;
-	s64 delta;
+	u64 delta;
 
 	if (idx == INTEL_PMC_IDX_FIXED_BTS)
 		return 0;
--- a/arch/x86/kernel/cpu/perf_event_intel.c
+++ b/arch/x86/kernel/cpu/perf_event_intel.c
@@ -2669,7 +2669,7 @@ __init int intel_pmu_init(void)
 
 	/* Support full width counters using alternative MSR range */
 	if (x86_pmu.intel_cap.full_width_write) {
-		x86_pmu.max_period = x86_pmu.cntval_mask;
+		x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
 		x86_pmu.perfctr = MSR_IA32_PMC0;
 		pr_cont("full-width counters, ");
 	}