Subject: Re: [RFC] seqlock,lockdep: Add lock primitives to read_seqbegin().
From: Peter Zijlstra
Date: Wed, 30 Mar 2011
On Wed, 2011-03-30 at 17:12 +0900, Tetsuo Handa wrote:
> Peter Zijlstra wrote:
> > On Tue, 2011-03-29 at 15:39 +0200, Peter Zijlstra wrote:
> > >
> > > That said, there are some outstanding issues with rw_locks and lockdep;
> > > Gautham and I worked on that for a while but we never persevered and
> > > finished it...
> > >
> > > http://lkml.org/lkml/2009/5/11/203
> >
> > I just did a quick rebase onto tip/master (compile tested only):
> >
> > http://programming.kicks-ass.net/sekrit/patches-lockdep.tar.bz2
> >
> > That series needs testing and a few patches to extend the
> > lib/locking-selftest* bits to cover the new functionality.
>
> Thanks, but I didn't apply the above tarball to 2.6.38.2 because the
> lockdep selftests failed.

I probably messed up the last patch; it's basically a complete rewrite
because lockdep changed significantly between when that series was
written and now.

> > In order to hit your inversion you need to do something like:
> >
> > cat /proc/locktest1 & cat /proc/locktest2
> >
> > If you run them serialized, you'll never hit that inversion.
>
> Yes, I know. But I think that lockdep should report the possibility of
> hitting that inversion even if I run them serialized.

True, my bad.
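
For concreteness, the inversion has roughly the following shape. This is a
hypothetical sketch -- the names and bodies are made up for illustration,
not taken from Tetsuo's actual test module:

#include <linux/spinlock.h>
#include <linux/seqlock.h>

static DEFINE_SPINLOCK(a_lock);
static DEFINE_SEQLOCK(b_seqlock);

static void locktest1_read(void)	/* e.g. behind /proc/locktest1 */
{
	unsigned seq;

	spin_lock(&a_lock);		/* order: a_lock -> b_seqlock */
	do {
		seq = read_seqbegin(&b_seqlock);
		/* ... read the data b_seqlock protects ... */
	} while (read_seqretry(&b_seqlock, seq));
	spin_unlock(&a_lock);
}

static void locktest2_write(void)	/* e.g. behind /proc/locktest2 */
{
	write_seqlock(&b_seqlock);	/* order: b_seqlock -> a_lock */
	spin_lock(&a_lock);
	/* ... update the data ... */
	spin_unlock(&a_lock);
	write_sequnlock(&b_seqlock);
}

If the writer has made the sequence odd and is spinning on a_lock while the
reader holds a_lock and is looping in read_seqbegin() waiting for an even
sequence, neither side can make progress. Once the read side is annotated,
lockdep sees both lock orders as soon as each path has run once, whether or
not they ever ran concurrently.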

> So, this is not a bug but intentional coding. Then we'd want a comment
> here explaining why the lockdep annotation is missing.

Nah, ideally we'd fix it by making the VDSO code use another primitive.

> > --- a/include/linux/seqlock.h
> > +++ b/include/linux/seqlock.h
> > @@ -94,6 +94,7 @@ static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
> >  		cpu_relax();
> >  		goto repeat;
> >  	}
> > +	rwlock_acquire_read(&sl->lock.dep_map, 0, 0, _RET_IP_);
> >
> >  	return ret;
> >  }
> > @@ -107,6 +108,8 @@ static __always_inline int read_seqretry(const seqlock_t *sl, unsigned start)
> >  {
> >  	smp_rmb();
> >
> > +	rwlock_release(&sl->lock.dep_map, 1, _RET_IP_);
> > +
> >  	return unlikely(sl->sequence != start);
> >  }
>
> Excuse me, but the lock embedded in seqlock_t is a spinlock rather than
> an rwlock. I assume you meant spin_acquire()/spin_release() rather than
> rwlock_acquire_read()/rwlock_release().

No, I meant what I wrote ;-) It doesn't matter to lockdep that it's a
spinlock (lockdep doesn't even know that), and in fact rwlock_acquire()
(the write version) is identical to spin_acquire(): both acquire the lock
in the exclusive state.

The read side of a seqlock is a recursive read lock, hence
rwlock_acquire_read().
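
For reference, this is roughly how those annotations map onto lockdep's
generic lock_acquire(); a paraphrase of include/linux/lockdep.h, with the
unrelated arguments elided:

/*
 * The "read" argument to lock_acquire() selects the mode:
 * 0 = exclusive, 1 = shared, 2 = shared/recursive.
 *
 *   spin_acquire(l, s, t, i)        =>  lock_acquire(l, s, t, 0, ..., i)
 *   rwlock_acquire(l, s, t, i)      =>  lock_acquire(l, s, t, 0, ..., i)
 *   rwlock_acquire_read(l, s, t, i) =>  lock_acquire(l, s, t, 2, ..., i)
 */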

> Also, I assume you meant to call spin_acquire() before entering the spin
> state, as with:
>
> static inline void __raw_spin_lock(raw_spinlock_t *lock)
> {
> 	preempt_disable();
> 	spin_acquire(&lock->dep_map, 0, 0, _RET_IP_);
> 	LOCK_CONTENDED(lock, do_raw_spin_trylock, do_raw_spin_lock);
> }
>
> Otherwise, lockdep cannot report this bug when it is hit upon the first
> call to this function.

Huh, no, of course not; a seqlock read side cannot contend in the classic
sense.
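
To contrast with __raw_spin_lock() above, here is the patched read side
again with comments; this paraphrases the diff plus the seqlock.h it
applies to, so treat it as a sketch rather than the final patch:

static __always_inline unsigned read_seqbegin(const seqlock_t *sl)
{
	unsigned ret;

repeat:
	ret = sl->sequence;
	smp_rmb();
	if (unlikely(ret & 1)) {	/* writer in progress, retry */
		cpu_relax();
		goto repeat;
	}
	/*
	 * Pure lockdep annotation: no real lock is taken, the reader has
	 * merely observed an even sequence number.  There is no blocking
	 * acquisition to wrap in LOCK_CONTENDED() the way __raw_spin_lock()
	 * does, hence no acquire annotation before the wait loop.
	 */
	rwlock_acquire_read(&sl->lock.dep_map, 0, 0, _RET_IP_);

	return ret;
}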

