From: Alex Shi <alex.shi@linaro.org>
Subject: [PATCH 2/3] rtmutex: deboost priority conditionally when rt-mutex unlock
rt_mutex_fastunlock() deboosts the 'current' task whenever the slow path
asks it to, but rt_mutex_slowunlock() sets the 'deboost' flag
unconditionally. That causes unnecessary priority adjustments.

Since 'current' is releasing this lock, it should be a higher-priority
task than the next top waiter, unless its current priority was inherited
from that top waiter. Only in that case does 'current' need to be
deboosted after the lock release.
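
For reference, the unlock fast path consumes that flag roughly as
sketched below. This is a simplified rendering of rt_mutex_fastunlock()
around this kernel version, not the verbatim source:

static inline void
rt_mutex_fastunlock(struct rt_mutex *lock,
		    bool (*slowfn)(struct rt_mutex *lock,
				   struct wake_q_head *wqh))
{
	DEFINE_WAKE_Q(wake_q);
	bool deboost;

	/* Fast path: no waiters, just drop ownership. */
	if (likely(rt_mutex_cmpxchg_release(lock, current, NULL)))
		return;

	/* Slow path: wake the top waiter, learn whether we were boosted. */
	deboost = slowfn(lock, &wake_q);

	wake_up_q(&wake_q);

	/* With this patch, deboost only when a boost really happened. */
	if (deboost)
		rt_mutex_adjust_prio(current);
}

Before this change, the slow path returned true unconditionally here, so
every contended unlock paid for a priority adjustment.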

Signed-off-by: Alex Shi <alex.shi@linaro.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Sebastian Siewior <bigeasy@linutronix.de>
To: linux-kernel@vger.kernel.org
To: Ingo Molnar <mingo@redhat.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
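A note on the new check: rt-mutex priorities use lower ->prio values for
higher priority, and PI inheritance guarantees the lock owner runs at
least as high as its top waiter. The helper below is a hypothetical
restatement of the condition for illustration; it is not a kernel API:

/*
 * Illustration only -- this helper does not exist in the kernel.
 * Lower ->prio value means higher priority.
 */
static bool unlock_needs_deboost(int curr_prio, int top_waiter_prio)
{
	/*
	 * While the lock is held, PI guarantees
	 * curr_prio <= top_waiter_prio. If current runs strictly higher
	 * (curr_prio < top_waiter_prio), its priority cannot have come
	 * from this waiter. Equal priority is the only case where
	 * 'current' may be running on priority inherited from the
	 * waiter it is about to wake, so only then deboost.
	 */
	return curr_prio == top_waiter_prio;
}

For example, a prio-30 owner boosted by a prio-20 top waiter unlocks with
20 == 20 and gets deboosted; a prio-10 owner was never boosted by that
waiter (10 != 20), so the adjustment is skipped.
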
kernel/locking/rtmutex.c | 19 ++++++++++++++++---
1 file changed, 16 insertions(+), 3 deletions(-)

diff --git a/kernel/locking/rtmutex.c b/kernel/locking/rtmutex.c
index 6edc32e..05ff685 100644
--- a/kernel/locking/rtmutex.c
+++ b/kernel/locking/rtmutex.c
@@ -1037,10 +1037,11 @@ static int task_blocks_on_rt_mutex(struct rt_mutex *lock,
  *
  * Called with lock->wait_lock held and interrupts disabled.
  */
-static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
+static bool mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 				    struct rt_mutex *lock)
 {
 	struct rt_mutex_waiter *waiter;
+	bool deboost = false;
 
 	raw_spin_lock(&current->pi_lock);
 
@@ -1055,6 +1056,15 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	rt_mutex_dequeue_pi(current, waiter);
 
 	/*
+	 * Since 'current' is releasing this lock, it should be a higher
+	 * priority task than the next top waiter, unless its current
+	 * priority was inherited from that top waiter. Only in that case
+	 * does 'current' need to be deboosted after the lock release.
+	 */
+	if (current->prio == waiter->prio)
+		deboost = true;
+
+	/*
 	 * As we are waking up the top waiter, and the waiter stays
 	 * queued on the lock until it gets the lock, this lock
 	 * obviously has waiters. Just set the bit here and this has
@@ -1067,6 +1077,8 @@ static void mark_wakeup_next_waiter(struct wake_q_head *wake_q,
 	raw_spin_unlock(&current->pi_lock);
 
 	wake_q_add(wake_q, waiter->task);
+
+	return deboost;
 }
 
 /*
@@ -1336,6 +1348,7 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 					struct wake_q_head *wake_q)
 {
 	unsigned long flags;
+	bool deboost = false;
 
 	/* irqsave required to support early boot calls */
 	raw_spin_lock_irqsave(&lock->wait_lock, flags);
@@ -1389,12 +1402,12 @@ static bool __sched rt_mutex_slowunlock(struct rt_mutex *lock,
 	 *
 	 * Queue the next waiter for wakeup once we release the wait_lock.
 	 */
-	mark_wakeup_next_waiter(wake_q, lock);
+	deboost = mark_wakeup_next_waiter(wake_q, lock);
 
 	raw_spin_unlock_irqrestore(&lock->wait_lock, flags);
 
 	/* check PI boosting */
-	return true;
+	return deboost;
 }
 
 /*
--
1.9.1