Subject: [PATCH RT 1/4] softirq: Avoid "local_softirq_pending" messages if ksoftirqd is blocked
From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

v3.18.138-rt116-rc1 stable review patch.
If anyone has any objections, please let me know.

-----------


[ Upstream commit 2cf32c1a3d9352df8017dbf84a1462c4a60a1826 ]

If the ksoftirqd thread has a softirq pending and is blocked on the
`local_softirq_locks' lock, then softirq_check_pending_idle() won't
complain, because the "lock owner" masks that softirq away from the
mask of pending softirqs.
If ksoftirqd has an additional softirq pending, that softirq won't be
masked out, because we never look at ksoftirqd's mask.

If there are still pending softirqs while going to idle, check
ksoftirqd's and ktimersoftd's masks before complaining about unhandled
softirqs.
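
At bottom the check is plain mask arithmetic: a handler thread that is
runnable (or PI-blocked and about to be woken) already accounts for the
softirq bits it has raised, so those bits may be dropped from the
warning mask. A minimal userspace sketch of the idea follows; `struct
task' and its two fields are simplified stand-ins for the kernel's
task_struct, not the real API, and the pi_lock handling is left out:

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-in for task_struct, reduced to what the check needs. */
struct task {
	unsigned int softirqs_raised;	/* softirq bits this thread will handle */
	bool will_run;			/* runnable, or PI-blocked and about to wake */
};

/* Same idea as softirq_check_runner_tsk(), minus pi_lock and task state. */
static bool check_runner_tsk(struct task *tsk, unsigned int *pending)
{
	if (!tsk || !tsk->will_run)
		return false;
	/* Clear the bits this thread already accounts for. */
	*pending &= ~tsk->softirqs_raised;
	return true;
}

int main(void)
{
	unsigned int warnpending = (1u << 1) | (1u << 3);
	struct task ksoftirqd = { .softirqs_raised = 1u << 3, .will_run = true };

	check_runner_tsk(&ksoftirqd, &warnpending);
	/* Bit 3 is ksoftirqd's to handle; only bit 1 remains unaccounted. */
	printf("unaccounted softirqs: %#x\n", warnpending);
	return 0;
}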

Cc: stable-rt@vger.kernel.org
Tested-by: Juri Lelli <juri.lelli@redhat.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Tom Zanussi <tom.zanussi@linux.intel.com>
---
kernel/softirq.c | 57 ++++++++++++++++++++++++++++++++++++++++----------------
1 file changed, 41 insertions(+), 16 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index 89c490b405ad..47d228982a58 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -91,6 +91,31 @@ static inline void softirq_clr_runner(unsigned int sirq)
 	sr->runner[sirq] = NULL;
 }
 
+static bool softirq_check_runner_tsk(struct task_struct *tsk,
+				     unsigned int *pending)
+{
+	bool ret = false;
+
+	if (!tsk)
+		return ret;
+
+	/*
+	 * The wakeup code in rtmutex.c wakes up the task
+	 * _before_ it sets pi_blocked_on to NULL under
+	 * tsk->pi_lock. So we need to check for both: state
+	 * and pi_blocked_on.
+	 */
+	raw_spin_lock(&tsk->pi_lock);
+	if (tsk->pi_blocked_on || tsk->state == TASK_RUNNING) {
+		/* Clear all bits pending in that task */
+		*pending &= ~(tsk->softirqs_raised);
+		ret = true;
+	}
+	raw_spin_unlock(&tsk->pi_lock);
+
+	return ret;
+}
+
 /*
  * On preempt-rt a softirq running context might be blocked on a
  * lock. There might be no other runnable task on this CPU because the
@@ -103,6 +128,7 @@ static inline void softirq_clr_runner(unsigned int sirq)
  */
 void softirq_check_pending_idle(void)
 {
+	struct task_struct *tsk;
 	static int rate_limit;
 	struct softirq_runner *sr = &__get_cpu_var(softirq_runners);
 	u32 warnpending;
@@ -112,24 +138,23 @@ void softirq_check_pending_idle(void)
 		return;
 
 	warnpending = local_softirq_pending() & SOFTIRQ_STOP_IDLE_MASK;
+	if (!warnpending)
+		return;
 	for (i = 0; i < NR_SOFTIRQS; i++) {
-		struct task_struct *tsk = sr->runner[i];
+		tsk = sr->runner[i];
 
-		/*
-		 * The wakeup code in rtmutex.c wakes up the task
-		 * _before_ it sets pi_blocked_on to NULL under
-		 * tsk->pi_lock. So we need to check for both: state
-		 * and pi_blocked_on.
-		 */
-		if (tsk) {
-			raw_spin_lock(&tsk->pi_lock);
-			if (tsk->pi_blocked_on || tsk->state == TASK_RUNNING) {
-				/* Clear all bits pending in that task */
-				warnpending &= ~(tsk->softirqs_raised);
-				warnpending &= ~(1 << i);
-			}
-			raw_spin_unlock(&tsk->pi_lock);
-		}
+		if (softirq_check_runner_tsk(tsk, &warnpending))
+			warnpending &= ~(1 << i);
+	}
+
+	if (warnpending) {
+		tsk = __this_cpu_read(ksoftirqd);
+		softirq_check_runner_tsk(tsk, &warnpending);
+	}
+
+	if (warnpending) {
+		tsk = __this_cpu_read(ktimer_softirqd);
+		softirq_check_runner_tsk(tsk, &warnpending);
 	}
 
 	if (warnpending) {
--
2.14.1