From: Petr Mladek <>
Subject: [PATCH 3/4] ftrace: Always destroy trampoline when shutting down the trace
Date: Mon, 27 Jun 2016 15:54:36 +0200
I have got the following kmemleak report when I enabled the ftrace self test:
unreferenced object 0xffffffffa000a000 (size 179):
  comm "swapper/0", pid 1, jiffies 4294892507 (age 82553.780s)
  hex dump (first 32 bytes):
    55 ff 74 24 10 55 48 89 e5 ff 74 24 18 55 48 89  U.t$.UH...t$.UH.
    e5 48 81 ec a8 00 00 00 48 89 44 24 50 48 89 4c  .H......H.D$PH.L
  backtrace:
    [<ffffffff81915918>] kmemleak_alloc+0x28/0x50
    [<ffffffff811d7474>] __vmalloc_node_range+0x184/0x270
    [<ffffffff8104a463>] module_alloc+0x63/0x70
    [<ffffffff81046726>] arch_ftrace_update_trampoline+0x96/0x230
    [<ffffffff81149de6>] ftrace_startup+0x76/0x200
    [<ffffffff8114a330>] register_ftrace_function+0x50/0x70
    [<ffffffff8118e312>] trace_selftest_ops+0x1b8/0x308
    [<ffffffff81ff8b20>] trace_selftest_startup_function+0x25b/0x4a8
    [<ffffffff81ff9002>] register_tracer+0x1a7/0x2ec
    [<ffffffff81ff9753>] init_function_trace+0x90/0x92
    [<ffffffff810003b8>] do_one_initcall+0x88/0x1c0
    [<ffffffff81fd22dc>] kernel_init_freeable+0x1f9/0x289
    [<ffffffff81912d1e>] kernel_init+0xe/0x110
    [<ffffffff81920cf2>] ret_from_fork+0x22/0x50
    [<ffffffffffffffff>] 0xffffffffffffffff
It is ops->trampoline that gets allocated for dyn_ops in trace_selftest_ops(). The leak comes from the 2nd round, when cnt == 2. In this case, the default trampoline must be used because there are two callbacks attached to all functions, namely trace_selftest_test_global_func() and trace_selftest_test_dyn_func().
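For reference, the leaking pattern boils down to something like the sketch below. It is simplified from trace_selftest_ops(); my_dyn_func() is made up for illustration, while the ftrace calls are the real API:

	static void my_dyn_func(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *op, struct pt_regs *regs)
	{
		/* stands in for trace_selftest_test_dyn_func() */
	}

	/* kzalloc'ed ops get FTRACE_OPS_FL_DYNAMIC set by the ftrace core */
	struct ftrace_ops *dyn_ops = kzalloc(sizeof(*dyn_ops), GFP_KERNEL);

	dyn_ops->func = my_dyn_func;

	/*
	 * Another callback is already attached to all functions, so the
	 * callbacks must be called via the list function and the trampoline
	 * allocated by arch_ftrace_update_trampoline() stays unused.
	 */
	register_ftrace_function(dyn_ops);

	/* ... run the test ... */

	unregister_ftrace_function(dyn_ops);
	kfree(dyn_ops);	/* ops->trampoline is still allocated => kmemleak */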
It seems that we forget to free ops->trampoline when the ftrace ops gets unregistered while the trampoline is not being used.
This patch creates ftrace_ops_free() to share the code and avoid such problems in the future.
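For context, the "hard force of sched synchronization" that the helper keeps is done by scheduling a dummy work item on every CPU. schedule_on_each_cpu() waits until the work has run everywhere, which guarantees that each CPU has passed through the scheduler at least once, even when RCU is not watching. ftrace_sync() itself is just a stub in the same file (quoted roughly from kernel/trace/ftrace.c of that time):

	static void ftrace_sync(struct work_struct *work)
	{
		/*
		 * This function is just a stub to implement a hard force
		 * of synchronize_sched(). This requires synchronizing
		 * tasks even in userspace and idle.
		 *
		 * Yes, function tracing is rude.
		 */
	}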
Signed-off-by: Petr Mladek <pmladek@suse.com>
---
 kernel/trace/ftrace.c | 62 ++++++++++++++++++++++++++-------------------------
 1 file changed, 32 insertions(+), 30 deletions(-)
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index a6804823a058..39f90bdb0f44 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2678,6 +2678,36 @@ static int ftrace_startup(struct ftrace_ops *ops, int command)
 	return 0;
 }
 
+/*
+ * Dynamic ops may be freed, we must make sure that all callers are done
+ * before leaving this function. The same goes for freeing the per_cpu
+ * data of the per_cpu ops.
+ *
+ * Again, normal synchronize_sched() is not good enough. We need to do
+ * a hard force of sched synchronization. This is because we use
+ * preempt_disable() to do RCU, but the function tracers can be called
+ * where RCU is not watching (like before user_exit()). We can not rely
+ * on the RCU infrastructure to do the synchronization, thus we must do
+ * it ourselves.
+ *
+ * The synchronization is not needed when the function tracing is not
+ * active at the moment.
+ */
+static void ftrace_ops_free(struct ftrace_ops *ops, bool sync)
+{
+	if (!(ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)))
+		return;
+
+	if (sync)
+		schedule_on_each_cpu(ftrace_sync);
+
+	if (ops->flags & FTRACE_OPS_FL_DYNAMIC)
+		arch_ftrace_trampoline_free(ops);
+
+	if (ops->flags & FTRACE_OPS_FL_PER_CPU)
+		per_cpu_ops_free(ops);
+}
+
 static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 {
 	int ret;
@@ -2711,14 +2741,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	}
 
 	if (!command || !ftrace_enabled) {
-		/*
-		 * If these are per_cpu ops, they still need their
-		 * per_cpu field freed. Since, function tracing is
-		 * not currently active, we can just free them
-		 * without synchronizing all CPUs.
-		 */
-		if (ops->flags & FTRACE_OPS_FL_PER_CPU)
-			per_cpu_ops_free(ops);
+		ftrace_ops_free(ops, false);
 		return 0;
 	}
 
@@ -2756,28 +2779,7 @@ static int ftrace_shutdown(struct ftrace_ops *ops, int command)
 	removed_ops = NULL;
 	ops->flags &= ~FTRACE_OPS_FL_REMOVING;
 
-	/*
-	 * Dynamic ops may be freed, we must make sure that all
-	 * callers are done before leaving this function.
-	 * The same goes for freeing the per_cpu data of the per_cpu
-	 * ops.
-	 *
-	 * Again, normal synchronize_sched() is not good enough.
-	 * We need to do a hard force of sched synchronization.
-	 * This is because we use preempt_disable() to do RCU, but
-	 * the function tracers can be called where RCU is not watching
-	 * (like before user_exit()). We can not rely on the RCU
-	 * infrastructure to do the synchronization, thus we must do it
-	 * ourselves.
-	 */
-	if (ops->flags & (FTRACE_OPS_FL_DYNAMIC | FTRACE_OPS_FL_PER_CPU)) {
-		schedule_on_each_cpu(ftrace_sync);
-
-		arch_ftrace_trampoline_free(ops);
-
-		if (ops->flags & FTRACE_OPS_FL_PER_CPU)
-			per_cpu_ops_free(ops);
-	}
+	ftrace_ops_free(ops, true);
 
 	return 0;
 }
-- 
1.8.5.6