From: Kan Liang <kan.liang@linux.intel.com>
Subject: [RESEND PATCH V2 3/4] perf/x86/intel: drain PEBS buffer in event read

When the PEBS interrupt threshold is larger than one, there is no way to
obtain the exact number of auto-reloads and the counter value needed for
the event update unless the PEBS buffer is flushed.
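
To make that concrete, here is a simplified sketch (not code from this
series; all names are hypothetical) of how the count delta could be
reconstructed once the number of drained PEBS records is known:

#include <stdint.h>

/*
 * Simplified, hypothetical sketch: with auto-reload the hardware
 * reloads the counter for every PEBS record it writes, so the count
 * visible to software is only exact when the number of buffered
 * records is known.  Draining the buffer on read() provides it.
 */
static uint64_t autoreload_count_delta(uint64_t drained_records,
				       uint64_t period,
				       int64_t prev_raw, int64_t new_raw)
{
	/* events counted since the most recent reload */
	int64_t residue = new_raw - prev_raw;

	/* each drained record accounts for one full reload period */
	return drained_records * period + residue;
}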

Drain the PEBS buffer in event read when large PEBS is enabled.

When the threshold is one, no special handling is needed even if
auto-reload is enabled, because auto-reload only takes effect on an
event overflow, and an event read does not involve an overflow.
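
For illustration, the path added here is reached by precise (PEBS)
events; roughly, a precise event with a fixed sample period may use the
multi-record ("large") PEBS buffer. The userspace sketch below (not part
of the patch; the event choice, period, and error handling are
arbitrary) opens such an event and reads its count:

#include <linux/perf_event.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	struct perf_event_attr attr;
	uint64_t count;
	int fd;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_INSTRUCTIONS;
	attr.sample_period = 100003;	/* fixed period, not frequency mode */
	attr.precise_ip = 2;		/* request PEBS */
	attr.exclude_kernel = 1;

	/* monitor the current task on any CPU */
	fd = syscall(__NR_perf_event_open, &attr, 0, -1, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	/* a real sampler would also mmap() the ring buffer here and
	 * then run the workload being measured */

	/* with this patch, large PEBS data is drained before the count
	 * is updated, so the value read here is exact */
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("count: %llu\n", (unsigned long long)count);

	close(fd);
	return 0;
}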

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/intel/core.c | 9 +++++++++
arch/x86/events/intel/ds.c | 10 ++++++++++
arch/x86/events/perf_event.h | 2 ++
3 files changed, 21 insertions(+)

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index 09c26a4..bdc35f8 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2060,6 +2060,14 @@ static void intel_pmu_del_event(struct perf_event *event)
 	intel_pmu_pebs_del(event);
 }
 
+static void intel_pmu_read_event(struct perf_event *event)
+{
+	if (event->attr.precise_ip)
+		return intel_pmu_pebs_read(event);
+
+	x86_perf_event_update(event);
+}
+
 static void intel_pmu_enable_fixed(struct hw_perf_event *hwc)
 {
 	int idx = hwc->idx - INTEL_PMC_IDX_FIXED;
@@ -3495,6 +3503,7 @@ static __initconst const struct x86_pmu intel_pmu = {
 	.disable		= intel_pmu_disable_event,
 	.add			= intel_pmu_add_event,
 	.del			= intel_pmu_del_event,
+	.read			= intel_pmu_read_event,
 	.hw_config		= intel_pmu_hw_config,
 	.schedule_events	= x86_schedule_events,
 	.eventsel		= MSR_ARCH_PERFMON_EVENTSEL0,
diff --git a/arch/x86/events/intel/ds.c b/arch/x86/events/intel/ds.c
index cc1f373..2027560 100644
--- a/arch/x86/events/intel/ds.c
+++ b/arch/x86/events/intel/ds.c
@@ -926,6 +926,16 @@ void intel_pmu_pebs_del(struct perf_event *event)
 	pebs_update_state(needed_cb, cpuc, event->ctx->pmu);
 }
 
+void intel_pmu_pebs_read(struct perf_event *event)
+{
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
+
+	if (pebs_needs_sched_cb(cpuc))
+		return intel_pmu_drain_pebs_buffer();
+
+	x86_perf_event_update(event);
+}
+
 void intel_pmu_pebs_disable(struct perf_event *event)
 {
 	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
diff --git a/arch/x86/events/perf_event.h b/arch/x86/events/perf_event.h
index 67426e51..93ec3b4 100644
--- a/arch/x86/events/perf_event.h
+++ b/arch/x86/events/perf_event.h
@@ -928,6 +928,8 @@ void intel_pmu_pebs_add(struct perf_event *event);

 void intel_pmu_pebs_del(struct perf_event *event);
 
+void intel_pmu_pebs_read(struct perf_event *event);
+
 void intel_pmu_pebs_enable(struct perf_event *event);
 
 void intel_pmu_pebs_disable(struct perf_event *event);
--
2.7.4