Subject: [patch V2 10/20] trace/bpf: Use migrate disable in trace_call_bpf()
BPF does not require preemption to be disabled. It only requires that the
task stays on the same CPU while a program runs. Reflect this by replacing
the preempt_disable/enable() pair with a migrate_disable/enable() pair.

On a non-RT kernel this maps to preempt_disable/enable().

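Not part of this patch, just an illustrative sketch of the mapping mentioned
above: assuming a !CONFIG_PREEMPT_RT configuration, the migrate pair can be
expressed directly in terms of the preempt pair (the real definitions come
from <linux/preempt.h> / earlier patches in this series), roughly:

  /* Illustrative sketch only: assumed non-RT mapping of the migrate pair */
  #ifndef CONFIG_PREEMPT_RT
  # define migrate_disable()	preempt_disable()
  # define migrate_enable()	preempt_enable()
  #endif
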
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
kernel/trace/bpf_trace.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)

--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -83,7 +83,7 @@ unsigned int trace_call_bpf(struct trace
 	if (in_nmi()) /* not supported yet */
 		return 1;
 
-	preempt_disable();
+	migrate_disable();
 
 	if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {
 		/*
@@ -115,7 +115,7 @@ unsigned int trace_call_bpf(struct trace
 
 out:
 	__this_cpu_dec(bpf_prog_active);
-	preempt_enable();
+	migrate_enable();
 
 	return ret;
 }