From: Kan Liang <kan.liang@linux.intel.com>
Subject: [PATCH 1/4] perf: Update the ctx->pmu for a hybrid system

The current generic perf code is not hybrid-friendly. A task perf
context assumes that there is only one perf_hw_context PMU, which is
stored in ctx->pmu when the perf context is allocated. On a context
switch, the corresponding PMU has to be disabled before any events are
scheduled. But on a hybrid system, the task may be scheduled to a CPU
whose PMU differs from ctx->pmu, so the PMU of the new CPU may not be
disabled.
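
As an illustration (a simplified sketch, not the literal upstream
code), the pre-patch sched-in path only ever disables and enables
ctx->pmu:

  /*
   * Simplified sketch of the pre-patch perf_event_context_sched_in().
   * Locking and the cpuctx->task_ctx handling are elided; the names
   * follow kernel/events/core.c.
   */
  static void perf_event_context_sched_in(struct perf_event_context *ctx,
                                          struct task_struct *task)
  {
          struct pmu *pmu = ctx->pmu;     /* fixed at context allocation */

          perf_pmu_disable(pmu);          /* disables ctx->pmu only... */
          /*
           * ...but on a hybrid system the task may now be running on a
           * CPU whose perf_hw_context PMU is not ctx->pmu, and that
           * CPU's PMU stays enabled here.
           */
          ctx_sched_in(ctx, __get_cpu_context(ctx), EVENT_ALL, task);
          perf_pmu_enable(pmu);
  }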

Add a supported_cpus mask to struct pmu to indicate which CPUs the
current PMU supports. When a task is scheduled in, check whether the
current CPU is in the supported mask of ctx->pmu; if it is not, update
ctx->pmu to the perf_hw_context PMU that does support the CPU.
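
As a sketch of the driver side (hypothetical; this patch only adds the
generic field, and nothing populates it yet), a hybrid PMU that sets
PERF_PMU_CAP_HETEROGENEOUS_CPUS would fill the mask before
registration. The helper function and the "cpu_core" name below are
invented for illustration:

  /*
   * Hypothetical example: pmu->capabilities, pmu->supported_cpus and
   * perf_pmu_register() are real; this helper and the "cpu_core"
   * name are made up.
   */
  static int example_register_hybrid_pmu(struct pmu *pmu,
                                         const struct cpumask *cpus)
  {
          pmu->capabilities |= PERF_PMU_CAP_HETEROGENEOUS_CPUS;
          cpumask_copy(&pmu->supported_cpus, cpus);

          return perf_pmu_register(pmu, "cpu_core", -1);
  }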

For a non-hybrid system, or while a PMU driver has not yet populated
the generic supported_cpus mask, the behavior is unchanged.

Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
 include/linux/perf_event.h | 7 +++++++
 kernel/events/core.c       | 29 ++++++++++++++++++++++++++++-
 2 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/include/linux/perf_event.h b/include/linux/perf_event.h
index f5a6a2f..111b1c1 100644
--- a/include/linux/perf_event.h
+++ b/include/linux/perf_event.h
@@ -301,6 +301,13 @@ struct pmu {
 	unsigned int			nr_addr_filters;
 
 	/*
+	 * For hybrid systems with PERF_PMU_CAP_HETEROGENEOUS_CPUS capability
+	 * @supported_cpus: The supported CPU mask of the current PMU.
+	 *		    Empty means a non-hybrid system or not implemented.
+	 */
+	cpumask_t			supported_cpus;
+
+	/*
 	 * Fully disable/enable this PMU, can be used to protect from the PMI
 	 * as well as for lazy/batch writing of the MSRs.
 	 */
diff --git a/kernel/events/core.c b/kernel/events/core.c
index 6fee4a7..b172f54 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3817,11 +3817,38 @@ static void cpu_ctx_sched_in(struct perf_cpu_context *cpuctx,
 	ctx_sched_in(ctx, cpuctx, event_type, task);
 }
 
+/*
+ * Update the ctx->pmu if the new task is scheduled to a CPU
+ * which has a different type of PMU.
+ */
+static inline void perf_event_context_update_hybrid(struct perf_event_context *ctx)
+{
+	struct pmu *pmu = ctx->pmu;
+
+	if (cpumask_empty(&pmu->supported_cpus) || cpumask_test_cpu(smp_processor_id(), &pmu->supported_cpus))
+		return;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(pmu, &pmus, entry) {
+		if ((pmu->task_ctx_nr == perf_hw_context) &&
+		    cpumask_test_cpu(smp_processor_id(), &pmu->supported_cpus)) {
+			ctx->pmu = pmu;
+			break;
+		}
+	}
+	rcu_read_unlock();
+	WARN_ON_ONCE(!cpumask_test_cpu(smp_processor_id(), &ctx->pmu->supported_cpus));
+}
+
 static void perf_event_context_sched_in(struct perf_event_context *ctx,
 					struct task_struct *task)
 {
 	struct perf_cpu_context *cpuctx;
-	struct pmu *pmu = ctx->pmu;
+	struct pmu *pmu;
+
+	if (ctx->pmu->capabilities & PERF_PMU_CAP_HETEROGENEOUS_CPUS)
+		perf_event_context_update_hybrid(ctx);
+	pmu = ctx->pmu;
 
 	cpuctx = __get_cpu_context(ctx);
 	if (cpuctx->task_ctx == ctx) {
--
2.7.4