Subject: [PATCH RESEND 1/1] sched/rt: minimize rq->lock contention in do_sched_rt_period_timer()
From: Dave Kleikamp <dave.kleikamp@oracle.com>
Date: Mon, 15 May 2017
With CONFIG_RT_GROUP_SCHED defined, do_sched_rt_period_timer() sequentially
takes each cpu's rq->lock. On a large, busy system, the cumulative time spent
acquiring all of these locks can be excessive, even triggering a watchdog
timeout.

If rt_rq->rt_time and rt_rq->rt_nr_running are both zero, this function does
nothing while holding the lock, so don't bother taking it at all.
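
For illustration only (this is not kernel code; the names fake_rq,
period_timer_tick and NR_CPUS_SIM are invented for the example): the
pattern is to peek at the per-rt_rq state under the cheap
rt_runtime_lock and only take the contended rq->lock when there is
real work to do. A minimal userspace sketch of that pattern using
pthreads:

/* Illustrative userspace analogue of the locking pattern in this patch.
 * Names (fake_rq, period_timer_tick, NR_CPUS_SIM) are hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS_SIM 8

struct fake_rq {
	pthread_mutex_t rq_lock;         /* the heavily contended lock */
	pthread_mutex_t rt_runtime_lock; /* cheap, per-rt_rq lock */
	unsigned long rt_time;           /* accumulated RT runtime */
	unsigned int rt_nr_running;      /* queued RT tasks */
};

static struct fake_rq rqs[NR_CPUS_SIM];

/* Periodic pass over all "CPUs", mirroring the shape of
 * do_sched_rt_period_timer(). */
static void period_timer_tick(void)
{
	int i;

	for (i = 0; i < NR_CPUS_SIM; i++) {
		struct fake_rq *rq = &rqs[i];
		int skip;

		/*
		 * Peek at the state under the cheap lock; only fall
		 * through to the expensive rq_lock when there is work.
		 */
		pthread_mutex_lock(&rq->rt_runtime_lock);
		skip = !rq->rt_time && !rq->rt_nr_running;
		pthread_mutex_unlock(&rq->rt_runtime_lock);
		if (skip)
			continue;

		pthread_mutex_lock(&rq->rq_lock);
		/* runtime replenishment / unthrottling would happen here */
		rq->rt_time = 0;
		pthread_mutex_unlock(&rq->rq_lock);
	}
}

int main(void)
{
	int i;

	for (i = 0; i < NR_CPUS_SIM; i++) {
		pthread_mutex_init(&rqs[i].rq_lock, NULL);
		pthread_mutex_init(&rqs[i].rt_runtime_lock, NULL);
	}
	/* Pretend only CPU 3 has pending RT time to process. */
	rqs[3].rt_time = 42;

	period_timer_tick();
	printf("rq[3].rt_time after tick: %lu\n", rqs[3].rt_time);
	return 0;
}

Builds with gcc -pthread; in the real patch the peek sits inside the
for_each_cpu() loop shown in the hunk below.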

Orabug: 25491970

Signed-off-by: Dave Kleikamp <dave.kleikamp@oracle.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
kernel/sched/rt.c | 11 +++++++++++
1 file changed, 11 insertions(+)

diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 9f3e40226dec..ae4a8c529a02 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -840,6 +840,17 @@ static int do_sched_rt_period_timer(struct rt_bandwidth *rt_b, int overrun)
int enqueue = 0;
struct rt_rq *rt_rq = sched_rt_period_rt_rq(rt_b, i);
struct rq *rq = rq_of_rt_rq(rt_rq);
+ int skip;
+
+ /*
+ * When span == cpu_online_mask, taking each rq->lock
+ * can be time-consuming. Try to avoid it when possible.
+ */
+ raw_spin_lock(&rt_rq->rt_runtime_lock);
+ skip = !rt_rq->rt_time && !rt_rq->rt_nr_running;
+ raw_spin_unlock(&rt_rq->rt_runtime_lock);
+ if (skip)
+ continue;

raw_spin_lock(&rq->lock);
if (rt_rq->rt_time) {
--
2.12.2