Date: Wed, 22 May 2019 16:35:45 +0200
From: Peter Zijlstra <>
Subject: Re: [PATCH v4 1/4] ftrace: Implement fs notification for tracing_max_latency
On Wed, May 22, 2019 at 02:30:14AM +0200, Viktor Rosendahl wrote:
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 874c427742a9..440cd1a62722 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -3374,6 +3374,7 @@ static void __sched notrace __schedule(bool preempt)
>  	struct rq *rq;
>  	int cpu;
>  
> +	trace_disable_fsnotify();
>  	cpu = smp_processor_id();
>  	rq = cpu_rq(cpu);
>  	prev = rq->curr;
> @@ -3449,6 +3450,7 @@ static void __sched notrace __schedule(bool preempt)
>  	}
>  
>  	balance_callback(rq);
> +	trace_enable_fsnotify();
>  }
>  
>  void __noreturn do_task_dead(void)
> diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
> index 80940939b733..1a38bcdb3652 100644
> --- a/kernel/sched/idle.c
> +++ b/kernel/sched/idle.c
> @@ -225,6 +225,7 @@ static void cpuidle_idle_call(void)
>  static void do_idle(void)
>  {
>  	int cpu = smp_processor_id();
> +	trace_disable_fsnotify();
>  	/*
>  	 * If the arch has a polling bit, we maintain an invariant:
>  	 *
> @@ -284,6 +285,7 @@ static void do_idle(void)
>  	smp_mb__after_atomic();
>  
>  	sched_ttwu_pending();
> +	/* schedule_idle() will call trace_enable_fsnotify() */
>  	schedule_idle();
>  
>  	if (unlikely(klp_patch_pending(current)))
I still hate this.. why are we doing this? We already have this stop_critical_timings() nonsense and are now adding more gunk.
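For context on the stop_critical_timings() remark: the idle path already brackets the region the latency tracers are supposed to ignore. A paraphrased sketch of that existing pattern in kernel/sched/idle.c (from memory, not the exact upstream code):

/*
 * Paraphrased sketch (not the exact upstream code): the irqsoff/preemptoff
 * tracers are already told to ignore the idle halt via the existing
 * stop_critical_timings()/start_critical_timings() pair.
 */
static void default_idle_call(void)
{
	if (current_clr_polling_and_test()) {
		local_irq_enable();
	} else {
		stop_critical_timings();	/* latency tracers stop measuring */
		arch_cpu_idle();		/* halt; may re-enable interrupts */
		start_critical_timings();	/* measuring resumes on wakeup */
	}
}

The objection, as I read it, is that trace_disable_fsnotify()/trace_enable_fsnotify() add a second, parallel bracketing mechanism on these hot paths on top of that existing one.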
> +static DEFINE_PER_CPU(atomic_t, notify_disabled) = ATOMIC_INIT(0);
> + atomic_set(&per_cpu(notify_disabled, cpu), 1);
> + atomic_set(&per_cpu(notify_disabled, cpu), 0);
> + if (!atomic_read(&per_cpu(notify_disabled, cpu)))
That's just wrong on so many levels..
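One way to make the objection concrete: the flag is only ever written by its own CPU, so wrapping a per-CPU variable in atomic_t buys nothing, and an atomic_read() of another CPU's copy is just as racy as a plain load. A minimal sketch, assuming the flag really is only written locally (fsnotify_allowed() is a hypothetical helper name, not from the patch):

/* Sketch only: a per-CPU flag written by the local CPU needs no atomic_t. */
static DEFINE_PER_CPU(int, notify_disabled);

static inline void trace_disable_fsnotify(void)
{
	this_cpu_write(notify_disabled, 1);	/* plain per-CPU store */
}

static inline void trace_enable_fsnotify(void)
{
	this_cpu_write(notify_disabled, 0);
}

/*
 * Hypothetical helper: reading another CPU's flag is racy either way;
 * atomic_read() would not add any ordering or exclusion here.
 */
static inline bool fsnotify_allowed(int cpu)
{
	return !per_cpu(notify_disabled, cpu);
}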