Subject: Re: lockdep complaints in slab allocator
On Fri, Nov 20, 2009 at 01:05:58PM +0200, Pekka Enberg wrote:
> Peter Zijlstra wrote:
>> On Fri, 2009-11-20 at 12:38 +0200, Pekka Enberg wrote:
>>>
>>> On Fri, Nov 20, 2009 at 11:25 AM, Peter Zijlstra <peterz@infradead.org>
>>> wrote:
>>>> 2) propagate the nesting information and use spin_lock_nested(), given
>>>> that slab is already a rat's nest, this won't make it any less obvious.
>>> spin_lock_nested() doesn't really help us here because there's a
>>> _real_ possibility of a recursive spin lock, right?
>> Well, I was working under the assumption that your analysis of it being
>> a false positive was right ;-)
>> I briefly tried to verify that, but got lost and gave up, at which point
>> I started looking for ways to annotate.
>
> Uh, ok, so apparently I was right after all. There's a comment in
> free_block() above the slab_destroy() call that refers to the comment above
> the alloc_slabmgmt() function definition, which explains it all.
>
> Long story short: ->slabp_cache never points to the same kmalloc cache
> we're allocating or freeing from. Where do we need to put the
> spin_lock_nested() annotation? Would it be enough to just use it in
> cache_free_alien() for alien->lock, or do we need it in cache_flusharray()
> as well?
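
[As a point of reference, a spin_lock_nested() annotation along these
lines would express the "never actually the same lock" guarantee to
lockdep. This is only an illustrative sketch with made-up context (ac
and alien as the two array_cache pointers), not a proposed patch;
SINGLE_DEPTH_NESTING comes from <linux/lockdep.h>:

	/*
	 * Sketch: the outer and inner locks are both array_cache locks
	 * and thus share a lock class.  Taking the inner one with a
	 * non-zero subclass tells lockdep they sit at different
	 * nesting levels, which silences the recursion report.  This
	 * is safe only because, per the alloc_slabmgmt() comment, the
	 * two can never be the same lock.
	 */
	spin_lock(&ac->lock);			/* outer array_cache lock */
	spin_lock_nested(&alien->lock, SINGLE_DEPTH_NESTING);
	/* ... transfer objects ... */
	spin_unlock(&alien->lock);
	spin_unlock(&ac->lock);
]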

Hmmm... If the nc->lock spinlocks are always from different slabs
(as alloc_slabmgmt()'s block comment claims), why not just give each
array_cache structure's lock its own struct lock_class_key? They
are zero size unless you have lockdep enabled.
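
[A rough sketch of that idea, with hypothetical field and helper names
rather than actual mm/slab.c code, could look like the following. One
caveat: lockdep expects class keys to live in static storage, so keys
embedded in dynamically allocated array_cache structures would need
extra care (modern kernels provide lockdep_register_key() for dynamic
keys):

	struct array_cache {
		unsigned int avail;
		spinlock_t lock;
		/* zero size unless CONFIG_LOCKDEP is enabled */
		struct lock_class_key lock_key;
		/* ... */
	};

	static void init_array_cache_lock(struct array_cache *ac)
	{
		spin_lock_init(&ac->lock);
		/* each instance becomes its own lockdep class */
		lockdep_set_class(&ac->lock, &ac->lock_key);
	}
]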

Thanx, Paul
