Subject: Re: [RFC tip/locking/lockdep v6 19/20] rcu: Equip sleepable RCU with lockdep dependency graph checks
On Thu, Apr 12, 2018 at 10:12:33AM +0800, Boqun Feng wrote:
> A trivial fix/hack would be adding local_irq_disable() and
> local_irq_enable() around the acquire/release pair in srcu_lock_sync(), like:
>
> static inline void srcu_lock_sync(struct lockdep_map *map)
> {
> 	local_irq_disable();
> 	lock_map_acquire(map);
> 	lock_map_release(map);
> 	local_irq_enable();
> }
>
> However, it might be better if lockdep could provide an annotation
> API for such an empty critical section, to say that the grab-and-drop
> is atomic. Something like:
>
> /*
>  * Annotate a wait point for all previous critical sections to
>  * exit.
>  *
>  * This won't make @map an irq-unsafe lock, no matter whether it's
>  * called with or without irqs disabled.
>  */
> lock_wait_unlock(struct lockdep_map *map, ..)
>
> And in this primitive, we do something similar to
> lock_acquire()+lock_release(). This primitive could be used elsewhere,
> as I believe we have several empty grab-and-drop critical sections for
> lockdep annotations, e.g. in start_flush_work().
>
> Thoughts?
>
> This certainly requires a bit more work; in the meantime, I will add
> another self-test case which has srcu_read_lock() called in irq context.

Yeah, I've never really bothered to clean those things up, but I don't
see any reason to stop you from doing it ;-)

As to the initial pattern with disabling IRQs, I think I've seen code
like that before, and in general performance isn't a top priority
(within reason) when you're running lockdep kernels, so I've usually let
it be.
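
[For reference, a minimal sketch of the annotation primitive proposed in the
quoted mail. The name lock_wait_unlock() comes from that proposal, but the
body is an assumption: it simply wraps the same irq-disable hack shown for
srcu_lock_sync() behind the proposed interface, using the existing
lock_map_acquire()/lock_map_release() helpers. A native lockdep implementation
that avoids touching irq state would need new annotation support and is not
shown here.]

/*
 * Hypothetical sketch only -- not the eventual implementation.
 * Annotate an empty, atomic grab-and-drop critical section on @map:
 * disabling irqs around the acquire/release pair mirrors the
 * srcu_lock_sync() hack above, so no interrupt can observe @map
 * as held in between.  Needs <linux/lockdep.h> and <linux/irqflags.h>.
 */
static inline void lock_wait_unlock(struct lockdep_map *map)
{
	unsigned long flags;

	local_irq_save(flags);	/* save/restore so it nests if irqs are already off */
	lock_map_acquire(map);	/* wait point: depend on all prior critical sections */
	lock_map_release(map);
	local_irq_restore(flags);
}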
