Subject: [PATCH] perf_events: proposed fix for broken intr throttling mechanism
Hi,

While running some tests with 3.2.0-rc7-tip, I noticed unexpected throttling
notification samples. I was sampling with a fixed period, large enough that
I could not possibly hit the default limit of 100000 samples/sec/cpu.
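
(For illustration, a command along these lines, not necessarily the exact one
I used, should stay far below that limit on any current CPU:

  perf record -e cycles -c 2000000 <workload>

i.e. roughly one sample every 2M cycles.)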

I investigated the matter and discovered that the following commit
is the culprit:

commit 0f5a2601284237e2ba089389fd75d67f77626cef
Author: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Wed Nov 16 14:38:16 2011 +0100

perf: Avoid a useless pmu_disable() in the perf-tick


The throttling mechanism REQUIRES that the hwc->interrupts counter be reset
at EACH timer tick, regardless of whether the event uses a fixed period or
frequency mode. The optimization introduced by that commit breaks this by
avoiding the call to perf_ctx_adjust_freq() at each timer tick. For events
with a fixed period, that function does not adjust any period at all, BUT it
does reset the throttling counter.
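
For reference, here is the shape of the interrupt-side accounting (a
simplified sketch, not the actual __perf_event_overflow() body, and the
helper name below is made up): each overflow bumps hwc->interrupts, and once
the per-tick budget is exceeded the event is marked MAX_INTERRUPTS and
stopped. The timer tick is the only place that zeroes the counter again and
restarts throttled events, which is why the per-tick work cannot simply be
skipped.

/* simplified sketch only; the real logic lives in __perf_event_overflow() */
static int account_overflow_sketch(struct perf_event *event)
{
	struct hw_perf_event *hwc = &event->hw;

	if (hwc->interrupts == MAX_INTERRUPTS)
		return 1;	/* already throttled, event stays stopped */

	hwc->interrupts++;
	if (HZ * hwc->interrupts > (u64)sysctl_perf_event_sample_rate) {
		/* per-tick budget exceeded: throttle until the next tick */
		hwc->interrupts = MAX_INTERRUPTS;
		perf_log_throttle(event, 0);
		return 1;	/* caller stops the event */
	}
	return 0;
}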

Given the way the throttling mechanism is implemented, we cannot avoid doing
some work at each timer tick; otherwise we lose many samples for no good
reason.

One may also question the motivation for checking the interrupt rate at each
timer tick rather than averaging it out over a longer period, e.g., a full
second. With HZ=1000 and the default limit of 100000 samples/sec, the
per-tick budget is only about 100 interrupts, so a short burst can trip the
throttle even when the average rate is well below the limit.

I see two short-term solutions:
1 - revert the commit above
2 - special-case contexts with no frequency-based sampling events

I have implemented solution 2 with the draft fix below. It does not invoke
perf_pmu_enable()/perf_pmu_disable(), and I am not sure whether that is
actually needed in this case. Please advise.
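
If it does turn out to be needed, I assume the whole loop would simply be
bracketed on the context's pmu, something along these lines (untested sketch,
just to show what I mean):

	raw_spin_lock(&ctx->lock);
	perf_pmu_disable(ctx->pmu);

	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
		/* reset hwc->interrupts and unthrottle as in the patch below */
	}

	perf_pmu_enable(ctx->pmu);
	raw_spin_unlock(&ctx->lock);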

Signed-off-by: Stephane Eranian <eranian@google.com>
---

diff --git a/kernel/events/core.c b/kernel/events/core.c
index 91fb68a..d1fe81a 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -2325,6 +2325,37 @@ static void perf_adjust_period(struct perf_event *event, u64 nsec, u64 count)
 	}
 }
 
+static void perf_ctx_adjust_throttle(struct perf_event_context *ctx)
+{
+	struct perf_event *event;
+	struct hw_perf_event *hwc;
+	u64 interrupts;
+
+	raw_spin_lock(&ctx->lock);
+
+	list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
+		if (event->state != PERF_EVENT_STATE_ACTIVE)
+			continue;
+
+		if (!event_filter_match(event))
+			continue;
+
+		hwc = &event->hw;
+
+		interrupts = hwc->interrupts;
+		hwc->interrupts = 0;
+
+		/*
+		 * unthrottle events on the tick
+		 */
+		if (interrupts == MAX_INTERRUPTS) {
+			perf_log_throttle(event, 1);
+			event->pmu->start(event, 0);
+		}
+	}
+	raw_spin_unlock(&ctx->lock);
+}
+
 static void perf_ctx_adjust_freq(struct perf_event_context *ctx, u64 period)
 {
 	struct perf_event *event;
@@ -2445,10 +2476,24 @@ void perf_event_task_tick(void)
 {
 	struct list_head *head = &__get_cpu_var(rotation_list);
 	struct perf_cpu_context *cpuctx, *tmp;
+	struct perf_event_context *ctx;
 
 	WARN_ON(!irqs_disabled());
 
 	list_for_each_entry_safe(cpuctx, tmp, head, rotation_list) {
+
+		/*
+		 * throttling counter must be reset at each tick
+		 * unthrottling must be done at each tick
+		 */
+		ctx = &cpuctx->ctx;
+		if (!ctx->nr_freq)
+			perf_ctx_adjust_throttle(&cpuctx->ctx);
+
+		ctx = cpuctx->task_ctx;
+		if (ctx && !ctx->nr_freq)
+			perf_ctx_adjust_throttle(ctx);
+
 		if (cpuctx->jiffies_interval == 1 ||
 		    !(jiffies % cpuctx->jiffies_interval))
 			perf_rotate_context(cpuctx);
