Subject: Re: [patch V3 06/22] bpf/trace: Remove redundant preempt_disable from trace_call_bpf()
On Mon, Feb 24, 2020 at 03:01:37PM +0100, Thomas Gleixner wrote:
> Similar to __bpf_trace_run this is redundant because __bpf_trace_run() is
> invoked from a trace point via __DO_TRACE() which already disables
> preemption _before_ invoking any of the functions which are attached to a
> trace point.
>
> Remove it and add a cant_sleep() check.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> ---
> V3: New patch. Replaces the previous one which converted this to migrate_disable()
> ---
> kernel/trace/bpf_trace.c | 3 +--
> 1 file changed, 1 insertion(+), 2 deletions(-)
>
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace
>  	if (in_nmi()) /* not supported yet */
>  		return 1;
> 
> -	preempt_disable();
> +	cant_sleep();
> 
>  	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
>  		/*
> @@ -115,7 +115,6 @@ unsigned int trace_call_bpf(struct trace
> 
>   out:
>  	__this_cpu_dec(bpf_prog_active);
> -	preempt_enable();
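
For context, the tracepoint dispatch the quoted changelog refers to looks roughly like this (a simplified sketch of the __DO_TRACE() macro from include/linux/tracepoint.h; the exact code, including the SRCU/rcuidle handling, varies across kernel versions and configs):

/*
 * Simplified sketch of __DO_TRACE(): preemption is disabled before any of
 * the functions attached to the tracepoint (including the BPF ones) are
 * invoked, which is why a preempt_disable() inside those handlers is
 * redundant.
 */
#define __DO_TRACE(tp, proto, args, cond, rcuidle)		\
	do {							\
		struct tracepoint_func *it_func_ptr;		\
		void *it_func;					\
		void *__data;					\
								\
		if (!(cond))					\
			return;					\
								\
		preempt_disable_notrace();			\
								\
		it_func_ptr = rcu_dereference_raw((tp)->funcs);	\
		if (it_func_ptr) {				\
			do {					\
				it_func = (it_func_ptr)->func;	\
				__data = (it_func_ptr)->data;	\
				((void(*)(proto))(it_func))(args); \
			} while ((++it_func_ptr)->func);	\
		}						\
								\
		preempt_enable_notrace();			\
	} while (0)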

My testing uncovered that the patch above was too aggressive:
[ 41.533438] BUG: assuming atomic context at kernel/trace/bpf_trace.c:86
[ 41.534265] in_atomic(): 0, irqs_disabled(): 0, pid: 2348, name: test_progs
[ 41.536907] Call Trace:
[ 41.537167] dump_stack+0x75/0xa0
[ 41.537546] __cant_sleep.cold.105+0x8b/0xa3
[ 41.538018] ? exit_to_usermode_loop+0x77/0x140
[ 41.538493] trace_call_bpf+0x4e/0x2e0
[ 41.538908] __uprobe_perf_func.isra.15+0x38f/0x690
[ 41.539399] ? probes_profile_seq_show+0x220/0x220
[ 41.539962] ? __mutex_lock_slowpath+0x10/0x10
[ 41.540412] uprobe_dispatcher+0x5de/0x8f0
[ 41.540875] ? uretprobe_dispatcher+0x7c0/0x7c0
[ 41.541404] ? down_read_killable+0x200/0x200
[ 41.541852] ? __kasan_kmalloc.constprop.6+0xc1/0xd0
[ 41.542356] uprobe_notify_resume+0xacf/0x1d60
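
The splat is consistent with where it triggers: the uprobe handlers run from uprobe_notify_resume() on the return-to-userspace path, i.e. in plain task context with preemption and interrupts enabled (in_atomic(): 0, irqs_disabled(): 0 above), so the new assertion fires. For reference, cant_sleep() boils down to roughly the following check (a condensed sketch of __cant_sleep() in kernel/sched/core.c; ratelimiting and config guards omitted):

/*
 * Condensed sketch of the check behind cant_sleep().  If interrupts are off
 * or preemption is disabled, the caller's "this is atomic context"
 * assumption holds and nothing is printed; otherwise it emits the BUG seen
 * in the log above.
 */
void __cant_sleep(const char *file, int line, int preempt_offset)
{
	if (irqs_disabled())
		return;

	if (preempt_count() > preempt_offset)
		return;

	printk(KERN_ERR "BUG: assuming atomic context at %s:%d\n", file, line);
	printk(KERN_ERR "in_atomic(): %d, irqs_disabled(): %d, pid: %d, name: %s\n",
	       in_atomic(), irqs_disabled(), current->pid, current->comm);
	dump_stack();
}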

The following fixes it:

commit 7b7b71ff43cc0b15567b60c38a951c8a2cbc97f0 (HEAD -> bpf-next)
Author: Alexei Starovoitov <ast@kernel.org>
Date: Mon Feb 24 11:27:15 2020 -0800

bpf: disable migration for bpf progs attached to uprobe

trace_call_bpf() no longer disables preemption on its own.
All callers of this function have to do it explicitly.

Signed-off-by: Alexei Starovoitov <ast@kernel.org>

diff --git a/kernel/trace/trace_uprobe.c b/kernel/trace/trace_uprobe.c
index 18d16f3ef980..7581f5eb6091 100644
--- a/kernel/trace/trace_uprobe.c
+++ b/kernel/trace/trace_uprobe.c
@@ -1333,8 +1333,15 @@ static void __uprobe_perf_func(struct trace_uprobe *tu,
 	int size, esize;
 	int rctx;
 
-	if (bpf_prog_array_valid(call) && !trace_call_bpf(call, regs))
-		return;
+	if (bpf_prog_array_valid(call)) {
+		u32 ret;
+
+		migrate_disable();
+		ret = trace_call_bpf(call, regs);
+		migrate_enable();
+		if (!ret)
+			return;
+	}

But looking at your patch, cant_sleep() seems unnecessarily strong.
Should it be cant_migrate() instead?
And should the two calls to __this_cpu*() be replaced with this_cpu*()?
If you can ack it, I can fix it up in place and apply the whole thing.
That was the only issue I found.
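
For reference, with both suggestions applied, the hunk quoted at the top would end up looking roughly like this (a sketch of the proposal only, assuming a cant_migrate() assertion is available in the tree; not an applied patch):

/*
 * Sketch of trace_call_bpf() with the suggested changes: assert only that
 * migration is disabled, and use the preemption-safe this_cpu_*() ops so
 * the per-CPU recursion counter does not rely on preemption being off.
 */
unsigned int trace_call_bpf(struct trace_event_call *call, void *ctx)
{
	unsigned int ret;

	if (in_nmi()) /* not supported yet */
		return 1;

	cant_migrate();

	if (unlikely(this_cpu_inc_return(bpf_prog_active) != 1)) {
		/* Another BPF program is already running on this CPU. */
		ret = 0;
		goto out;
	}

	/* ... BPF program array invocation unchanged ... */

 out:
	this_cpu_dec(bpf_prog_active);
	return ret;
}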
