Subject: Re: [PATCH 1/4] perf, x86: Avoid checkpointed counters causing excessive TSX aborts v4
On Thu, Aug 08, 2013 at 06:15:43PM -0700, Andi Kleen wrote:
> +++ b/arch/x86/kernel/cpu/perf_event_intel.c

> @@ -1141,6 +1146,17 @@ static void intel_pmu_enable_event(struct perf_event *event)
>  int intel_pmu_save_and_restart(struct perf_event *event)
>  {
>  	x86_perf_event_update(event);
> +	/*
> +	 * For a checkpointed counter always reset back to 0. This
> +	 * avoids a situation where the counter overflows, aborts the
> +	 * transaction and is then set back to shortly before the
> +	 * overflow, and overflows and aborts again.
> +	 */
> +	if (unlikely(event_is_checkpointed(event))) {
> +		/* No race with NMIs because the counter should not be armed */
> +		wrmsrl(event->hw.event_base, 0);
> +		local64_set(&event->hw.prev_count, 0);
> +	}

Right, if it weren't for KVM you could've done a smaller special-case
handler for checkpointed events, but as it stands I suppose it makes
sense to use the normal paths.

>  	return x86_perf_event_set_period(event);
>  }
> 
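For reference, event_is_checkpointed() is introduced elsewhere in this
patch; it presumably boils down to testing the Haswell IN_TXCP qualifier
in the event config. A minimal sketch, assuming the flag is recorded as
the HSW_IN_TX_CHECKPOINTED bit in hw.config (an assumption, not a quote
from the posted patch):

	static inline bool event_is_checkpointed(struct perf_event *event)
	{
		/*
		 * Sketch only: assumes the checkpointed (IN_TXCP)
		 * qualifier is carried as the HSW_IN_TX_CHECKPOINTED
		 * bit in the raw event config, as on Haswell.
		 */
		return (event->hw.config & HSW_IN_TX_CHECKPOINTED) != 0;
	}
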
> @@ -1224,6 +1240,15 @@ again:
>  		x86_pmu.drain_pebs(regs);
>  	}
> 
> +	/*
> +	 * To avoid spurious interrupts with perf stat always reset checkpointed
> +	 * counters.
> +	 *
> +	 * XXX move somewhere else.

Like where? AFAICT it needs to be here. You could write it prettier, I
suppose, and I guess we'll eventually need to assume any event can be
checkpointed, but I don't see how it could be done elsewhere.

> +	 */
> +	if (cpuc->events[2] && event_is_checkpointed(cpuc->events[2]))
> +		status |= (1ULL << 2);
> +
>  	for_each_set_bit(bit, (unsigned long *)&status, X86_PMC_IDX_MAX) {
>  		struct perf_event *event = cpuc->events[bit];
> 
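As for the eventual generalized form: once any counter may host a
checkpointed event, the hardcoded index-2 fixup would presumably become
a loop over the active events. A minimal sketch, assuming it stays at
this spot in intel_pmu_handle_irq() right before the status bits are
walked (names as in the quoted hunk; the loop itself is hypothetical,
not the posted patch):

	/*
	 * Hypothetical generalization of the counter-2 fixup above:
	 * a TSX abort rolls a checkpointed counter back and can clear
	 * its overflow bit in GLOBAL_STATUS, making the PMI look
	 * spurious. Force-probe every active checkpointed counter.
	 */
	for (bit = 0; bit < x86_pmu.num_counters; bit++) {
		struct perf_event *event = cpuc->events[bit];

		if (event && event_is_checkpointed(event))
			status |= (1ULL << bit);
	}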