Subject: Re: kvm+nouveau induced lockdep gripe
On 2020-10-24 13:00:00 [+0800], Hillf Danton wrote:
>
> Hmm... curious how that word got into your mind. And when?
> > [ 30.457363]
> > other info that might help us debug this:
> > [ 30.457369] Possible unsafe locking scenario:
> >
> > [ 30.457375] CPU0
> > [ 30.457378] ----
> > [ 30.457381] lock(&mgr->vm_lock);
> > [ 30.457386] <Interrupt>
> > [ 30.457389] lock(&mgr->vm_lock);
> > [ 30.457394]
> > *** DEADLOCK ***
> >
> > <snips 999 lockdep lines and zillion ATOMIC_SLEEP gripes>

The backtrace above shows the "normal" (process-context) acquisition of
vm_lock. What should follow in the lockdep report is the backtrace of the
in-softirq usage.

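For reference, the locking pattern lockdep is warning about looks roughly
like the sketch below. The function names are made up for illustration
(they are not the actual nouveau call paths); only the locking shape
matters:

	static DEFINE_RWLOCK(vm_lock);	/* stands in for mgr->vm_lock */

	/* process context, BH not disabled */
	static void process_ctx_update(void)
	{
		write_lock(&vm_lock);
		/* ... modify the offset tree ... */
		write_unlock(&vm_lock);
	}

	/* softirq context */
	static void softirq_lookup(void)
	{
		/*
		 * If this softirq interrupts process_ctx_update() on the
		 * same CPU while the write lock is held, it spins here
		 * forever: the interrupted writer cannot resume until the
		 * softirq returns.
		 */
		read_lock(&vm_lock);
		/* ... look up a node ... */
		read_unlock(&vm_lock);
	}
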
>
> Dunno if blocking softint is a right cure.
>
> --- a/drivers/gpu/drm/drm_vma_manager.c
> +++ b/drivers/gpu/drm/drm_vma_manager.c
> @@ -229,6 +229,7 @@ EXPORT_SYMBOL(drm_vma_offset_add);
> void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
> struct drm_vma_offset_node *node)
> {
> + local_bh_disable();

There is write_lock_bh(). However, converting only this one user will just
produce the same backtrace somewhere else, unless all other users of the
lock already run in a BH-disabled region; see the sketch below the quoted
hunk.

> write_lock(&mgr->vm_lock);
>
> if (drm_mm_node_allocated(&node->vm_node)) {

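For comparison, a consistent conversion would look roughly like the sketch
below; whether the _bh variants are the right cure at all is exactly the
open question here. The read side, e.g. drm_vma_offset_lock_lookup(), would
have to move to read_lock_bh()/read_unlock_bh() in the same patch:

	void drm_vma_offset_remove(struct drm_vma_offset_manager *mgr,
				   struct drm_vma_offset_node *node)
	{
		/* take the lock with BH disabled so a softirq on this CPU
		 * cannot come in while we hold it */
		write_lock_bh(&mgr->vm_lock);

		if (drm_mm_node_allocated(&node->vm_node)) {
			drm_mm_remove_node(&node->vm_node);
			memset(&node->vm_node, 0, sizeof(node->vm_node));
		}

		write_unlock_bh(&mgr->vm_lock);
	}
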
Sebastian
