Subject: Re: [PATCH] lockdep: Print proper scenario if cross deadlock detected at acquisition time
On Tue, Sep 19, 2017 at 08:52:06PM +0800, Boqun Feng wrote:
> For a potential deadlock involving CROSSRELEASE, as follows:
>
> 	P1			P2
> 	===========		=============
> 	lock(A)
> 	lock(X)
> 				lock(A)
> 				commit(X)
>
> A: normal lock, X: cross lock
>
> we could detect it at two places:
>
> 1. commit time:
>
> We have run P1 first, and have the dependency A --> X in the graph,
> and then we run P2 and find the deadlock.
>
> 2. acquisition time:
>
> We have run P2 first, and have the dependency X --> A in the graph
> (because another P3 may have run previously and is acquiring lock X),
> and then we run P1 and find the deadlock.
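
For reference, the A/X pattern above can be reproduced with a mutex as
the normal lock A and a completion as the cross lock X. A minimal,
hypothetical sketch (p1()/p2() below simply stand in for the two
contexts):

	#include <linux/mutex.h>
	#include <linux/completion.h>

	static DEFINE_MUTEX(A);			/* normal lock A */
	static DECLARE_COMPLETION(X);		/* cross lock X */

	/* P1: takes A, then "lock(X)", i.e. waits on the completion. */
	static void p1(void)
	{
		mutex_lock(&A);
		wait_for_completion(&X);	/* blocks until p2() completes X */
		mutex_unlock(&A);
	}

	/* P2: must take A before it can "commit(X)", i.e. complete it. */
	static void p2(void)
	{
		mutex_lock(&A);			/* blocks: p1() holds A */
		complete(&X);			/* never reached, hence deadlock */
		mutex_unlock(&A);
	}

Run p1() and p2() concurrently and neither makes progress: p1() holds
A while waiting for X, and p2() needs A before it can complete X.
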
>
> In the current print_circular_lock_scenario(), for 1) we could print
> the right scenario and note that it's a deadlock related to
> CROSSRELEASE; however, for 2) we print the scenario as if it were a
> normal lockdep deadlock, i.e. something like:
>
> | [ 35.310179] ======================================================
> | [ 35.310749] WARNING: possible circular locking dependency detected
> | [ 35.310749] 4.13.0-rc4+ #1 Not tainted
> | [ 35.310749] ------------------------------------------------------
> | [ 35.310749] torture_onoff/766 is trying to acquire lock:
> | [ 35.313943] ((complete)&st->done){+.+.}, at: [<ffffffffb905f5a6>] takedown_cpu+0x86/0xf0
> | [ 35.313943]
> | [ 35.313943] but task is already holding lock:
> | [ 35.313943] (sparse_irq_lock){+.+.}, at: [<ffffffffb90c5e42>] irq_lock_sparse+0x12/0x20
> | [ 35.313943]
> | [ 35.313943] which lock already depends on the new lock.
> ...
> | [ 35.313943] other info that might help us debug this:
> | [ 35.313943]
> | [ 35.313943] Possible unsafe locking scenario:
> | [ 35.313943]
> | [ 35.313943]        CPU0                    CPU1
> | [ 35.313943]        ----                    ----
> | [ 35.313943]   lock(sparse_irq_lock);
> | [ 35.313943]                                lock((complete)&st->done);
> | [ 35.313943]                                lock(sparse_irq_lock);
> | [ 35.313943]   lock((complete)&st->done);
> | [ 35.313943]
> | [ 35.313943] *** DEADLOCK ***
>
> It's better to print a proper scenario related to CROSSRELEASE to help
> users find their bugs more easily, so improve this.
>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Byungchul Park <byungchul.park@lge.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> ---
> The sample output of print_circular_lock_scenario() is from Paul E. McKenney.
>
> kernel/locking/lockdep.c | 17 +++++++++++++++++
> 1 file changed, 17 insertions(+)
>
> diff --git a/kernel/locking/lockdep.c b/kernel/locking/lockdep.c
> index 44c8d0d17170..67a407bcc814 100644
> --- a/kernel/locking/lockdep.c
> +++ b/kernel/locking/lockdep.c
> @@ -1156,6 +1156,23 @@ print_circular_lock_scenario(struct held_lock *src,
>  		__print_lock_name(target);
>  		printk(KERN_CONT ");\n");
>  		printk("\n *** DEADLOCK ***\n\n");
> +	} else if (cross_lock(src->instance)) {
> +		printk(" Possible unsafe locking scenario by crosslock:\n\n");
> +		printk("       CPU0                    CPU1\n");
> +		printk("       ----                    ----\n");
> +		printk("  lock(");
> +		__print_lock_name(target);
> +		printk(KERN_CONT ");\n");
> +		printk("  lock(");
> +		__print_lock_name(source);
> +		printk(KERN_CONT ");\n");
> +		printk("                               lock(");
> +		__print_lock_name(parent == source ? target : parent);
> +		printk(KERN_CONT ");\n");
> +		printk("                               unlock(");
> +		__print_lock_name(source);

Sometimes, it gives a wrong scenario. For example:

	lock target
	lock source
			lock parent
			lock target
					lock parent of parent
					lock parent

							lock parent of parent
							unlock source

We should consider this kind of scenario as well so that the report is
always correct.
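
In kernel terms, such a chain could look like the following sketch
(hypothetical; target/parent/parent_of_parent as mutexes, source as a
completion, unlocks omitted for brevity):

	#include <linux/mutex.h>
	#include <linux/completion.h>

	static DEFINE_MUTEX(target);
	static DEFINE_MUTEX(parent);
	static DEFINE_MUTEX(parent_of_parent);
	static DECLARE_COMPLETION(source);

	static void p1(void)			/* target --> source */
	{
		mutex_lock(&target);
		wait_for_completion(&source);	/* "lock source" */
	}

	static void p2(void)			/* parent --> target */
	{
		mutex_lock(&parent);
		mutex_lock(&target);		/* blocks: p1() holds target */
	}

	static void p3(void)			/* parent_of_parent --> parent */
	{
		mutex_lock(&parent_of_parent);
		mutex_lock(&parent);		/* blocks: p2() holds parent */
	}

	static void p4(void)			/* source --> parent_of_parent */
	{
		mutex_lock(&parent_of_parent);	/* blocks: p3() holds it */
		complete(&source);		/* "unlock source", never reached */
	}

Here the context that finally unlocks (commits) source is not the one
that takes parent, so a two-CPU scenario cannot describe the deadlock.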

> +		printk(KERN_CONT ");\n");
> +		printk("\n *** DEADLOCK ***\n\n");
>  	} else {
>  		printk(" Possible unsafe locking scenario:\n\n");
>  		printk("       CPU0                    CPU1\n");
> --
> 2.14.1
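
FWIW, if I read the new branch right, the scenario section of the
splat quoted above would then become something like this (parent ==
source there, so the third print falls back to target):

	 Possible unsafe locking scenario by crosslock:

	       CPU0                    CPU1
	       ----                    ----
	  lock(sparse_irq_lock);
	  lock((complete)&st->done);
	                               lock(sparse_irq_lock);
	                               unlock((complete)&st->done);

	  *** DEADLOCK ***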
