Subject: Re: [PATCH RFC] perf, core: disable pmu while context rotation only if needed
On Tue, Nov 15, 2011 at 01:07:13PM +0100, Peter Zijlstra wrote:
> On Tue, 2011-11-15 at 13:34 +0200, Gleb Natapov wrote:
> >
> > Currently the PMU is disabled and re-enabled on each timer interrupt,
> > even when no rotation or frequency adjustment is needed. On Intel CPUs
> > this results in two writes to the PERF_GLOBAL_CTRL MSR per tick. On
> > bare metal this does not cause a significant slowdown, but when running
> > perf in a virtual machine it leads to a 20% slowdown on my machine.
>
>
> I detest asymmetric locking like that; does something like the below
> also work for you?
>
It does.
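
(For reference, the pre-patch per-tick path looks roughly like the sketch
below -- condensed from the surrounding diff context rather than quoted
verbatim -- which is where the two PERF_GLOBAL_CTRL writes per tick come
from:)

        /* pre-patch perf_rotate_context(), condensed */
        perf_ctx_lock(cpuctx, cpuctx->task_ctx);
        perf_pmu_disable(cpuctx->ctx.pmu);      /* 1st GLOBAL_CTRL write */

        perf_ctx_adjust_freq(&cpuctx->ctx, interval);
        if (ctx)
                perf_ctx_adjust_freq(ctx, interval);

        if (rotate) {
                /* ... sched out flexible events, rotate, sched back in ... */
        }

        if (remove)
                list_del_init(&cpuctx->rotation_list);

        perf_pmu_enable(cpuctx->ctx.pmu);       /* 2nd GLOBAL_CTRL write */
        perf_ctx_unlock(cpuctx, cpuctx->task_ctx);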

>
> +        if (!rotate && !freq)
> +                goto done;
> +
>          perf_ctx_lock(cpuctx, cpuctx->task_ctx);
>          perf_pmu_disable(cpuctx->ctx.pmu);
> +
> +        if (!freq)
> +                goto rotate;
> +
Why the goto? Why not:

if (freq) {
>         perf_ctx_adjust_freq(&cpuctx->ctx, interval);
>         if (ctx)
>                 perf_ctx_adjust_freq(ctx, interval);
}

And the same for the next goto.
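
Spelled out, something like this (untested sketch on top of your patch,
keeping your rotate/freq/remove flags and eliding the middle of the
rotate path):

        if (!rotate && !freq)
                goto done;

        perf_ctx_lock(cpuctx, cpuctx->task_ctx);
        perf_pmu_disable(cpuctx->ctx.pmu);

        if (freq) {
                perf_ctx_adjust_freq(&cpuctx->ctx, interval);
                if (ctx)
                        perf_ctx_adjust_freq(ctx, interval);
        }

        if (rotate) {
                cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
                /* ... sched out/rotate the task context as before ... */
                perf_event_sched_in(cpuctx, ctx, current);
        }

        perf_pmu_enable(cpuctx->ctx.pmu);
        perf_ctx_unlock(cpuctx, cpuctx->task_ctx);

done:
        if (remove)
                list_del_init(&cpuctx->rotation_list);

The only goto left is the one that skips the lock/unlock entirely when
there is nothing to do.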

>
> +rotate:
>          if (!rotate)
> -                goto done;
> +                goto unlock;
>
>          cpu_ctx_sched_out(cpuctx, EVENT_FLEXIBLE);
>          if (ctx)
> @@ -2413,12 +2432,13 @@ static void perf_rotate_context(struct perf_cpu_context *cpuctx)
>
>          perf_event_sched_in(cpuctx, ctx, current);
>
> +unlock:
> +        perf_pmu_enable(cpuctx->ctx.pmu);
> +        perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> +
> done:
>          if (remove)
>                  list_del_init(&cpuctx->rotation_list);
> -
> -        perf_pmu_enable(cpuctx->ctx.pmu);
> -        perf_ctx_unlock(cpuctx, cpuctx->task_ctx);
> }
>
> void perf_event_task_tick(void)

--
Gleb.

