Subject: Re: [PATCH] perf_event: fix slow and broken cgroup context switch code
On Thu, 2011-08-25 at 15:58 +0200, Stephane Eranian wrote:
> +static inline void perf_event_task_sched_out(struct task_struct *prev,
> +					     struct task_struct *next)
> {
> 	perf_sw_event(PERF_COUNT_SW_CONTEXT_SWITCHES, 1, NULL, 0);
>
> -	__perf_event_task_sched_out(task, next);
> +	if (static_branch(&perf_sched_events))
> +		__perf_event_task_sched_out(prev, next);
> }

Right, so the reason we removed the static branch from there is

lkml.kernel.org/r/20110324164436.GC1930@jolsa.brq.redhat.com

Now I think the series 075e0b0085 to 64ce312618e should have cured that
problem, and adding the static_branch() back is safe again. But there's
no mention of any of this in the Changelog.

Also, adding back the static_branch() is mostly unrelated to the rest of
the patch, which I shall now stare at :-)
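
For reference, a rough userspace sketch of the guard pattern being re-added
here (a plain bool stands in for the perf_sched_events jump label, which the
kernel patches into a nop/jmp; this only illustrates the control flow, not
the actual static_branch() mechanism):

/*
 * Sketch only: names mirror the kernel ones for readability.  In the
 * kernel the disabled case is a patched nop, so the context-switch
 * fast path pays nothing while no cgroup/sw events exist.
 */
#include <stdbool.h>
#include <stdio.h>

static bool perf_sched_events;	/* stands in for the jump label key */

static void __perf_event_task_sched_out(const char *prev, const char *next)
{
	printf("slow path: perf context switch %s -> %s\n", prev, next);
}

static inline void perf_event_task_sched_out(const char *prev, const char *next)
{
	/* PERF_COUNT_SW_CONTEXT_SWITCHES accounting would go here */
	if (perf_sched_events)		/* static_branch(&perf_sched_events) */
		__perf_event_task_sched_out(prev, next);
}

int main(void)
{
	perf_event_task_sched_out("taskA", "taskB");	/* skipped: no events */
	perf_sched_events = true;			/* jump_label_inc() analogue */
	perf_event_task_sched_out("taskA", "taskB");	/* slow path taken */
	return 0;
}

With the key disabled the slow path is never entered, which is the whole
point of keeping the check on the context-switch path cheap.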

