From: Kan Liang <kan.liang@linux.intel.com>
Date: 2019-08-16
Subject: [PATCH] perf/x86: Consider pinned events for group validation

perf stat -M <metric> relies on weak groups to reject unschedulable
groups and run their events as non-groups.
This relies on the group validation code in the kernel. Unfortunately,
that code does not take pinned events, such as the NMI watchdog, into
account. So some groups can pass validation, but then still never
get scheduled.
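
For reference, weak groups are the same mechanism that can be requested
explicitly on the command line with the :W group modifier; the groups
generated by -M use the same fallback. A minimal sketch, with
placeholder event names:

$ perf stat -e '{cycles,instructions}:W' -a sleep 1

If the kernel rejects the group during validation, perf retries and
opens the members as individual (non-grouped) events instead of
failing outright.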

For example,

$ echo 1 > /proc/sys/kernel/nmi_watchdog
$ perf stat -M Page_Walks_Utilization

Performance counter stats for 'system wide':

     <not counted>      itlb_misses.walk_pending                                      (0.00%)
     <not counted>      dtlb_load_misses.walk_pending                                 (0.00%)
     <not counted>      dtlb_store_misses.walk_pending                                (0.00%)
     <not counted>      ept.walk_pending                                              (0.00%)
     <not counted>      cycles                                                        (0.00%)

1.176613558 seconds time elapsed

Pinned events are always scheduled first, so a new group must be
schedulable together with the currently pinned events; otherwise it
will never get a chance to be scheduled later.
The trick is to treat the currently pinned events as part of the new
group and insert them into the fake_cpuc. The scheduling simulation
then tells us whether everything can be scheduled together. The
fake_cpuc never touches event state, so the currently pinned events
are not affected.
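
For context, a "pinned" event is one created with
perf_event_attr.pinned set; the NMI watchdog's hardlockup detector
creates such a cycles event on every CPU. A minimal user-space sketch
of what such an event looks like (illustrative only; the watchdog's
event is created in-kernel, and the helper name here is made up):

#include <linux/perf_event.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Open a pinned cycles event on the given CPU. A pinned event is
 * always scheduled before flexible events; if it cannot get a
 * counter it goes into an error state instead of rotating with
 * the other events.
 */
static int open_pinned_cycles(int cpu)
{
	struct perf_event_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = PERF_TYPE_HARDWARE;
	attr.config = PERF_COUNT_HW_CPU_CYCLES;
	attr.pinned = 1;	/* must always be on a counter */

	/* pid == -1, cpu == cpu: count all tasks on one CPU
	 * (needs sufficient privileges)
	 */
	return syscall(__NR_perf_event_open, &attr, -1, cpu, -1, 0);
}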

This won't catch every case that cannot be scheduled, such as events
pinned differently on different CPUs, or complicated constraints.
But for the most common case, the NMI watchdog interacting with the
current perf metrics, it is good enough.

After applying the patch,

$ echo 1 > /proc/sys/kernel/nmi_watchdog
$ perf stat -M Page_Walks_Utilization

Performance counter stats for 'system wide':

         2,491,910      itlb_misses.walk_pending  #      0.0 Page_Walks_Utilization   (79.94%)
        13,630,942      dtlb_load_misses.walk_pending                                 (80.02%)
           207,255      dtlb_store_misses.walk_pending                                (80.04%)
                 0      ept.walk_pending                                              (80.04%)
       236,204,924      cycles                                                        (79.97%)

0.901785713 seconds time elapsed

Reported-by: Stephane Eranian <eranian@google.com>
Suggested-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Kan Liang <kan.liang@linux.intel.com>
---
arch/x86/events/core.c | 22 +++++++++++++++++++++-
1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 81b005e..c8ed441 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -2011,9 +2011,11 @@ static int validate_event(struct perf_event *event)
  */
 static int validate_group(struct perf_event *event)
 {
+	struct cpu_hw_events *cpuc = this_cpu_ptr(&cpu_hw_events);
 	struct perf_event *leader = event->group_leader;
 	struct cpu_hw_events *fake_cpuc;
-	int ret = -EINVAL, n;
+	struct perf_event *pinned_event;
+	int ret = -EINVAL, n, i;
 
 	fake_cpuc = allocate_fake_cpuc();
 	if (IS_ERR(fake_cpuc))
@@ -2033,6 +2035,24 @@ static int validate_group(struct perf_event *event)
 	if (n < 0)
 		goto out;
 
+	/*
+	 * The new group must be schedulable together
+	 * with the currently pinned events.
+	 * Otherwise, it will never get a chance
+	 * to be scheduled later.
+	 */
+	for (i = 0; i < cpuc->n_events; i++) {
+		pinned_event = cpuc->event_list[i];
+		if (WARN_ON_ONCE(!pinned_event))
+			continue;
+		if (!pinned_event->attr.pinned)
+			continue;
+		fake_cpuc->n_events = n;
+		n = collect_events(fake_cpuc, pinned_event, false);
+		if (n < 0)
+			goto out;
+	}
+
 	fake_cpuc->n_events = 0;
 	ret = x86_pmu.schedule_events(fake_cpuc, n, NULL);

--
2.7.4