 
Subject: Re: lockdep recursive locking detected (rcu_kthread / __cache_free)
Date: 2011-10-04
    On Tue, 2011-10-04 at 09:28 -0500, Christoph Lameter wrote:
    > On Mon, 3 Oct 2011, Paul E. McKenney wrote:
    >
    > > On Mon, Oct 03, 2011 at 03:46:11PM -0500, Christoph Lameter wrote:
    > > > On Mon, 3 Oct 2011, Paul E. McKenney wrote:
    > > >
    > > > > The first lock was acquired here in an RCU callback. The later lock that
    > > > > lockdep complained about appears to have been acquired from a recursive
    > > > > call to __cache_free(), with no help from RCU. This looks to me like
    > > > > one of the issues that arise from the slab allocator using itself to
    > > > > allocate slab metadata.
    > > >
    > > > Right. However, this is a false positive, since the slab cache
    > > > holding the metadata is different from the slab caches holding the
    > > > slab data, and the metadata slab cache does not itself use any
    > > > metadata slab caches.
    > >
    > > Wouldn't it be possible to pass a new flag to the metadata slab caches
    > > upon creation so that their locks could be placed in a separate lock
    > > class? Just allocate a separate lock_class_key structure for each such
    > > lock in that case, and then use lockdep_set_class_and_name to associate
    > > that structure with the corresponding lock. I do this in kernel/rcutree.c
    > > in order to allow the rcu_node tree's locks to nest properly.
    >
    > We could give the kmalloc array a different class from created slab
    > caches. That should have the desired effect.
    >
    > But that already seems to be the case (looking at init_node_lock_keys).
    > Non-OFF_SLAB caches seem to be getting a different lock class? Why is
    > this not working?
    >
    > static void init_node_lock_keys(int q)
    > {
    >         struct cache_sizes *s = malloc_sizes;
    >
    >         if (g_cpucache_up != FULL)
    >                 return;
    >
    >         for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
    >                 struct kmem_list3 *l3;
    >
    >                 l3 = s->cs_cachep->nodelists[q];
    >                 if (!l3 || OFF_SLAB(s->cs_cachep))
    >                         continue;
    >
    >                 slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
    >                                       &on_slab_alc_key, q);
    >         }
    > }

    Right, so we recently poked at this to fix some other splats, see:

    30765b92ada267c5395fc788623cb15233276f5c
    83835b3d9aec8e9f666d8223d8a386814f756266

    It could of course be that I got confused and broke stuff instead;
    could someone who knows slab (I guess that's either Pekka, Christoph
    or David) stare at those patches?
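
    A note on the report itself: lockdep tracks lock classes, not lock
    instances. Every slab node's l3->list_lock is initialized from the
    same spin_lock_init() call site, so they all share one class, and a
    free on a data cache that recurses into the metadata cache looks to
    lockdep like the same lock being taken twice. A toy sketch of that
    pattern (struct cache, cache_init() and free_object() are made-up
    names for illustration, not kernel code):

        #include <linux/spinlock.h>

        struct cache {
                spinlock_t list_lock;   /* stands in for kmem_list3.list_lock */
        };

        static void cache_init(struct cache *c)
        {
                /*
                 * spin_lock_init() embeds one static lock_class_key per
                 * call site, so every lock initialized here lands in the
                 * same lockdep class, however many caches exist.
                 */
                spin_lock_init(&c->list_lock);
        }

        static void free_object(struct cache *data, struct cache *meta)
        {
                spin_lock(&data->list_lock);
                /* freeing off-slab metadata recurses into the allocator */
                spin_lock(&meta->list_lock);    /* same class: "recursive locking" */
                spin_unlock(&meta->list_lock);
                spin_unlock(&data->list_lock);
        }

    The two instances can never actually deadlock, which is why Christoph
    calls it a false positive, but lockdep cannot see that without extra
    annotation.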
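
    Paul's suggestion above maps onto lockdep's API roughly as follows.
    This is only a sketch of the idea, not the actual patch:
    metadata_l3_key and annotate_metadata_cache() are invented names,
    and a single key is shared by all metadata caches, mirroring how
    on_slab_l3_key is shared in init_node_lock_keys() above:

        #include <linux/lockdep.h>

        /* one extra class for the metadata caches' list locks (illustrative) */
        static struct lock_class_key metadata_l3_key;

        static void annotate_metadata_cache(struct kmem_cache *cachep, int node)
        {
                struct kmem_list3 *l3 = cachep->nodelists[node];

                if (!l3)
                        return;

                /*
                 * Re-key this lock into its own class, so taking a data
                 * cache's list_lock and then a metadata cache's list_lock
                 * becomes an ordinary A -> B ordering instead of A -> A
                 * recursion.
                 */
                lockdep_set_class_and_name(&l3->list_lock, &metadata_l3_key,
                                           "metadata-l3->list_lock");
        }

    With the metadata caches in their own class, lockdep would still catch
    a real inversion (a metadata cache nesting back into a data cache) but
    would no longer flag the data -> metadata nesting shown in the splat.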

