Subject: Re: [PATCH] bpf: Fix preempt_enable_no_resched() abuse
On Tue, Apr 23, 2019 at 09:55:59PM +0200, Peter Zijlstra wrote:
> On Tue, Apr 23, 2019 at 03:41:32PM -0400, Waiman Long wrote:
> > On 4/23/19 3:34 PM, Peter Zijlstra wrote:
> > > On Tue, Apr 23, 2019 at 03:12:16PM -0400, Waiman Long wrote:
>
> > >> You are right on that. However, there is a variant called
> > >> preempt_enable_no_resched() that doesn't have this side effect. So I am
> > >> going to use that one instead.
> > > Only if the very next line is schedule(). Otherwise you're very much not
> > > going to use that function.
> >
> > May I know the reason why?
>
> Because it can 'consume' a need_resched and introduces arbitrary delays
> before the schedule() eventually happens, breaking the very notion of
> PREEMPT=y (and the fundamentals RT relies on).
>
> > I saw a number of instances of
> > preempt_enable_no_resched() without a schedule() right next to it.
>
> Look more closely... and let me know; if true, those are bugs that need
> fixing.
>
> Argghhh.. BPF...
>
> Also, with the recent RCU rework, we can probably drop that
> rcu_read_lock()/rcu_read_unlock() from there if we're disabling
> preemption anyway.
>
> ---
> Subject: bpf: Fix preempt_enable_no_resched() abuse
>
> Unless the very next line is schedule(), or implies it, one must not use
> preempt_enable_no_resched(). It can cause a preemption to go missing and
> thereby cause arbitrary delays, breaking the PREEMPT=y invariant.
>
> Cc: Roman Gushchin <guro@fb.com>
> Cc: Alexei Starovoitov <ast@kernel.org>
> Cc: Daniel Borkmann <daniel@iogearbox.net>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>
> diff --git a/include/linux/bpf.h b/include/linux/bpf.h
> index f02367faa58d..944ccc310201 100644
> --- a/include/linux/bpf.h
> +++ b/include/linux/bpf.h
> @@ -510,7 +510,7 @@ int bpf_prog_array_copy(struct bpf_prog_array __rcu *old_array,
>  		}					\
>  _out:							\
>  		rcu_read_unlock();			\
> -		preempt_enable_no_resched();		\
> +		preempt_enable();			\
>  		_ret;					\

Applied to bpf tree. Thanks!
It should have been fixed long ago. Not sure how we kept forgetting about it.
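
For illustration (not part of the thread): a minimal sketch of the rule Peter
describes, with made-up helper names ok_pattern()/broken_pattern(). The only
legitimate use of preempt_enable_no_resched() is when the very next statement
is schedule(), so that a need_resched set while preemption was disabled is
acted on immediately:

#include <linux/preempt.h>
#include <linux/sched.h>

static void ok_pattern(void)
{
	preempt_disable();
	/* ... critical section ... */

	/*
	 * Drop the preempt count without a resched check, but only
	 * because schedule() runs immediately afterwards, so any
	 * need_resched set during the critical section is honoured.
	 */
	preempt_enable_no_resched();
	schedule();
}

static void broken_pattern(void)
{
	preempt_disable();
	/* ... critical section ... */

	/*
	 * BROKEN: if a wakeup set need_resched while preemption was
	 * disabled, nothing reschedules here; the task keeps running
	 * until some unrelated preemption point, introducing an
	 * arbitrary delay and breaking PREEMPT=y (and RT) guarantees.
	 * preempt_enable() is the correct call, as the patch above uses.
	 */
	preempt_enable_no_resched();
}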
