From:    David Carrillo-Cisneros <>
Subject: [RFC 5/6] perf/core: rotation no longer necessary. Behavior has changed. Beware
Date:    Tue, 10 Jan 2017 02:25:01 -0800
The sched in/out process updates timestamps and "rotates" ctx->inactive_groups.

This changes the speed at which rotation happens: before, events rotated by one event per timer interruption; now they rotate by q events per timer interruption, where q is the number of events added to the PMU per sched in.
Signed-off-by: David Carrillo-Cisneros <davidcc@google.com>
---
 kernel/events/core.c | 22 +++++-----------------
 1 file changed, 5 insertions(+), 17 deletions(-)
diff --git a/kernel/events/core.c b/kernel/events/core.c
index c7715b2627a9..f5d9c13b485f 100644
--- a/kernel/events/core.c
+++ b/kernel/events/core.c
@@ -3642,19 +3642,6 @@ static void perf_adjust_freq_unthr_context(struct perf_event_context *ctx,
 	raw_spin_unlock(&ctx->lock);
 }
 
-/*
- * Round-robin a context's events:
- */
-static void rotate_ctx(struct perf_event_context *ctx)
-{
-	/*
-	 * Rotate the first entry last of non-pinned groups. Rotation might be
-	 * disabled by the inheritance code.
-	 */
-	if (!ctx->rotate_disable)
-		list_rotate_left(&ctx->flexible_groups);
-}
-
 static int perf_rotate_context(struct perf_cpu_context *cpuctx)
 {
 	struct perf_event_context *ctx = NULL;
@@ -3681,10 +3668,11 @@ static int perf_rotate_context(struct perf_cpu_context *cpuctx)
 	if (ctx)
 		ctx_sched_out(ctx, cpuctx, EVENT_FLEXIBLE);
 
-	rotate_ctx(&cpuctx->ctx);
-	if (ctx)
-		rotate_ctx(ctx);
-
+	/*
+	 * A sched out will insert event groups at end of inactive_groups,
+	 * a sched in will schedule events at the beginning of inactive_groups.
+	 * This causes a rotation.
+	 */
 	perf_event_sched_in(cpuctx, ctx, current);
 
 	perf_pmu_enable(cpuctx->ctx.pmu);
-- 
2.11.0.390.gc69c2f50cf-goog