Subject: Re: [PATCH] bpf: convert hashtab lock to raw lock

On Sat, Oct 31, 2015 at 09:47:36AM -0400, Steven Rostedt wrote:
> On Fri, 30 Oct 2015 17:03:58 -0700
> Alexei Starovoitov <alexei.starovoitov@gmail.com> wrote:
>
> > On Fri, Oct 30, 2015 at 03:16:26PM -0700, Yang Shi wrote:
> > > When running bpf samples on rt kernel, it reports the below warning:
> > >
> > > BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:917
> > > in_atomic(): 1, irqs_disabled(): 128, pid: 477, name: ping
> > > Preemption disabled at:[<ffff80000017db58>] kprobe_perf_func+0x30/0x228
> > ...
> > > diff --git a/kernel/bpf/hashtab.c b/kernel/bpf/hashtab.c
> > > index 83c209d..972b76b 100644
> > > --- a/kernel/bpf/hashtab.c
> > > +++ b/kernel/bpf/hashtab.c
> > > @@ -17,7 +17,7 @@
> > >  struct bpf_htab {
> > >  	struct bpf_map map;
> > >  	struct hlist_head *buckets;
> > > -	spinlock_t lock;
> > > +	raw_spinlock_t lock;
> >
> > How do we address such things in general?
> > I bet there are tons of places around the kernel that
> > call spin_lock from atomic.
> > I'd hate to lose the benefits of lockdep of non-raw spin_lock
> > just to make rt happy.
>
> You won't lose any benefits of lockdep. Lockdep still checks
> raw_spin_lock(). The only difference between raw_spin_lock and
> spin_lock is that in -rt spin_lock turns into an rt_mutex() and
> raw_spin_lock stays a spin lock.

I see. The patch makes sense then.
Would be good to document this peculiarity of spin_lock.
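
For context on Steven's point above, here is a minimal sketch of the locking
pattern the patch switches to. It uses a hypothetical struct example_htab
rather than the actual kernel/bpf/hashtab.c code: on a PREEMPT_RT kernel a
spinlock_t becomes a sleeping rt_mutex, so taking one from a kprobe/perf
handler (preemption and IRQs disabled) triggers the "sleeping function called
from invalid context" BUG quoted above, while a raw_spinlock_t stays a true
spinning lock on both mainline and -rt.

#include <linux/spinlock.h>

/* Hypothetical stand-in for struct bpf_htab, for illustration only. */
struct example_htab {
	raw_spinlock_t lock;	/* was spinlock_t before the patch */
	/* ... map, buckets, etc. ... */
};

static void example_htab_init(struct example_htab *htab)
{
	/* Done once, e.g. at map creation time. */
	raw_spin_lock_init(&htab->lock);
}

static void example_htab_update(struct example_htab *htab)
{
	unsigned long flags;

	/*
	 * raw_spin_lock_irqsave() never sleeps, even on -rt, so it is
	 * safe here although the caller (e.g. a BPF program run from
	 * kprobe_perf_func) already has preemption and IRQs disabled.
	 */
	raw_spin_lock_irqsave(&htab->lock, flags);
	/* ... modify the hash bucket under the lock ... */
	raw_spin_unlock_irqrestore(&htab->lock, flags);
}

As Steven notes, lockdep still checks raw_spin_lock(), so the only thing
given up by going raw is that the lock can no longer turn into a preemptible
rt_mutex on -rt.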


