From: Frederic Weisbecker <>
Subject: [PATCH 18/24] sched: Update nohz rq clock before searching busiest group on load balancing
Date: Thu, 20 Dec 2012 19:33:05 +0100
While load balancing an rq target, we look for the busiest group. This operation may require an up-to-date rq clock if we end up calling scale_rt_power(). To this end, update it manually if the target is running tickless.

DOUBT: don't we actually also need this in the vanilla kernel, in case this_cpu is in dyntick-idle mode?
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Alessio Igor Bogani <abogani@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Avi Kivity <avi@redhat.com>
Cc: Chris Metcalf <cmetcalf@tilera.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Geoff Levand <geoff@infradead.org>
Cc: Gilad Ben Yossef <gilad@benyossef.com>
Cc: Hakan Akkan <hakanakkan@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/sched/fair.c | 13 +++++++++++++
 1 files changed, 13 insertions(+), 0 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 291e225..b1b791d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4795,6 +4795,19 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 
 	schedstat_inc(sd, lb_count[idle]);
 
+	/*
+	 * find_busiest_group() may need an uptodate cpu clock
+	 * (see scale_rt_power()). If the CPU is nohz, its
+	 * clock may be stale.
+	 */
+	if (tick_nohz_full_cpu(this_cpu)) {
+		local_irq_save(flags);
+		raw_spin_lock(&this_rq->lock);
+		update_rq_clock(this_rq);
+		raw_spin_unlock(&this_rq->lock);
+		local_irq_restore(flags);
+	}
+
 redo:
 	group = find_busiest_group(&env, balance);
-- 
1.7.5.4
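
[Editor's note] To illustrate why a stale rq clock matters here, below is a rough, self-contained user-space sketch (not kernel code) of the computation scale_rt_power() performs with rq->clock in the 3.8-era scheduler. The decay of rt_avg done by sched_avg_update() is omitted, and all numeric values are invented for demonstration only.

/*
 * Illustrative model of scale_rt_power(): the fraction of CPU power left
 * after subtracting RT time depends on (clock - age_stamp), so a clock
 * that stopped advancing on a full-nohz CPU skews the result.
 */
#include <stdio.h>
#include <stdint.h>

#define SCHED_POWER_SCALE 1024ULL

static uint64_t scaled_power(uint64_t clock, uint64_t age_stamp,
			     uint64_t rt_avg, uint64_t avg_period)
{
	uint64_t total = avg_period + (clock - age_stamp);
	uint64_t available = total > rt_avg ? total - rt_avg : 0;

	return available * SCHED_POWER_SCALE / total;
}

int main(void)
{
	uint64_t age_stamp  = 0;
	uint64_t rt_avg     = 200000;	/* accumulated RT time (made up) */
	uint64_t avg_period = 1000000;	/* averaging period (made up) */

	/* Clock kept current by the tick vs. one gone stale on a nohz CPU. */
	uint64_t fresh_clock = 5000000;
	uint64_t stale_clock = 1000000;

	printf("power with fresh clock: %llu\n",
	       (unsigned long long)scaled_power(fresh_clock, age_stamp,
						rt_avg, avg_period));
	printf("power with stale clock: %llu\n",
	       (unsigned long long)scaled_power(stale_clock, age_stamp,
						rt_avg, avg_period));
	return 0;
}

With a stale clock the computed power differs from the fresh-clock value, which is why the patch refreshes the clock before find_busiest_group(). It takes this_rq->lock with interrupts disabled around update_rq_clock() because the rq clock is normally only updated with that lock held.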