From: Byungchul Park <byungchul.park@lge.com>
Subject: [PATCH v3] lib/spinlock_debug.c: prevent an infinite recursive cycle in spin_dump()
Date: Wed, 27 Jan 2016

changes from v2 to v3
- avoid printk() only in the "lockup suspected" case, not in the real
lockup case, where avoiding it does not help at all.
- consider not only console_sem.lock but also logbuf_lock, which is
also used by printk().

changes from v1 to v2
- only change the comment and commit message, esp. replacing "deadlock"
with "infinite recursive cycle", since it is not an actual deadlock.

thanks,
byungchul

-----8<-----
From 92c84ea45ac18010804aa09eeb9e03f797a4b2b0 Mon Sep 17 00:00:00 2001
From: Byungchul Park <byungchul.park@lge.com>
Date: Wed, 27 Jan 2016 13:33:24 +0900
Subject: [PATCH v3] lib/spinlock_debug.c: prevent an infinite recursive cycle
in spin_dump()

With CONFIG_DEBUG_SPINLOCK enabled, spin_dump() can fall into an
infinite recursive cycle. The backtrace shows printk() ->
console_trylock() -> do_raw_spin_lock() -> spin_dump() -> printk() ->
... repeating infinitely.
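
For illustration, the cycle in condensed form (a simplified sketch of
the call path, not the exact code; see kernel/printk/printk.c and
kernel/locking/spinlock_debug.c for the real functions):

	printk()
	  console_trylock()			/* needs console_sem */
	    down_trylock(&console_sem)		/* takes console_sem.lock, a raw spinlock */
	      do_raw_spin_lock(&console_sem.lock)
	        __spin_lock_debug()		/* CONFIG_DEBUG_SPINLOCK, lock is contended */
	          spin_dump(lock, "lockup suspected")
	            printk()			/* back to the top: the cycle repeats */

printk() also takes logbuf_lock directly while storing the message, so
an analogous cycle exists for that lock, which is why is_printk_lock()
below checks both locks.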

When spin_dump() is called from within printk(), we should prevent the
debug spinlock code from calling printk() again in that context. It is
reasonable to skip printing "lockup suspected", which is only a warning
message, when printing it would definitely cause a real lockup.

However, this patch does not touch spin_bug(), since avoiding printk()
there does not help at all: reaching spin_bug() almost always means a
real lockup has already happened, and in that case suppressing the
message would not be helpful.

Signed-off-by: Byungchul Park <byungchul.park@lge.com>
---
kernel/locking/spinlock_debug.c | 16 +++++++++++++---
kernel/printk/printk.c | 6 ++++++
2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/spinlock_debug.c b/kernel/locking/spinlock_debug.c
index 0374a59..fefc76c 100644
--- a/kernel/locking/spinlock_debug.c
+++ b/kernel/locking/spinlock_debug.c
@@ -103,6 +103,8 @@ static inline void debug_spin_unlock(raw_spinlock_t *lock)
 	lock->owner_cpu = -1;
 }
 
+extern int is_printk_lock(raw_spinlock_t *lock);
+
 static void __spin_lock_debug(raw_spinlock_t *lock)
 {
 	u64 i;
@@ -113,11 +115,19 @@ static void __spin_lock_debug(raw_spinlock_t *lock)
 			return;
 		__delay(1);
 	}
-	/* lockup suspected: */
-	spin_dump(lock, "lockup suspected");
+
+	/*
+	 * If this function is called from printk(), then we should
+	 * not call printk() more. Or it will cause an infinite
+	 * recursive cycle!
+	 */
+	if (likely(!is_printk_lock(lock))) {
+		/* lockup suspected: */
+		spin_dump(lock, "lockup suspected");
 #ifdef CONFIG_SMP
-	trigger_all_cpu_backtrace();
+		trigger_all_cpu_backtrace();
 #endif
+	}
 
 	/*
 	 * The trylock above was causing a livelock. Give the lower level arch
diff --git a/kernel/printk/printk.c b/kernel/printk/printk.c
index 2ce8826..657f8dd 100644
--- a/kernel/printk/printk.c
+++ b/kernel/printk/printk.c
@@ -1981,6 +1981,12 @@ asmlinkage __visible void early_printk(const char *fmt, ...)
 }
 #endif
 
+int is_printk_lock(raw_spinlock_t *lock)
+{
+	return (lock == &console_sem.lock) ||
+	       (lock == &logbuf_lock) ;
+}
+
 static int __add_preferred_console(char *name, int idx, char *options,
 				   char *brl_options)
 {
--
1.9.1