From: Frederic Weisbecker <>
Subject: [PATCH 1/6] sched: Disable lb_bias feature for full dynticks
Date: Wed, 12 Jun 2013 16:02:33 +0200
If we run in full dynticks mode, we currently have no way to correctly update the secondary decaying indexes of the CPU load stats, as they are normally maintained by update_cpu_load_active() at each tick.
We do have an infrastructure that handles tickless loads (cf. decay_load_missed()), but it only covers idle tickless loads, i.e. it only applies when the CPU has run nothing but the idle task over the tickless timeslice.
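To make the gap concrete, here is a rough sketch of the per-tick decay that update_cpu_load_active() applies to those secondary indexes (the helper name is hypothetical and the constants/rounding are simplified; the real code also uses decay_load_missed() to catch up after idle tickless gaps):

/* Rough sketch only, not verbatim kernel code. */
#define CPU_LOAD_IDX_MAX 5

static void sketch_update_cpu_load(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
                                   unsigned long this_load)
{
        int i;

        /* Index 0 simply tracks the instantaneous runqueue load. */
        cpu_load[0] = this_load;

        for (i = 1; i < CPU_LOAD_IDX_MAX; i++) {
                unsigned long scale = 1UL << i;

                /* Exponential decay toward this_load; higher indexes react more slowly. */
                cpu_load[i] = (cpu_load[i] * (scale - 1) + this_load) >> i;
        }
}

On a busy full dynticks CPU this refresh simply never runs, so the decayed indexes go stale.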
Until we can provide a sane mathematical solution to handle full dynticks loads, let's simply deactivate the LB_BIAS sched feature when CONFIG_NO_HZ_FULL is enabled, as it is currently the only user of the decayed load records.
The first load index, which represents the current runqueue load weight, is still maintained and usable.
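In other words, once the feature is compiled out, both source_load() and target_load() degenerate to the instantaneous runqueue weight. A sketch with a hypothetical helper name, paraphrasing the effect of the diff below:

static unsigned long load_without_lb_bias(int cpu)
{
        /*
         * Equivalent to weighted_cpuload(cpu): the stale cpu_load[]
         * history is never consulted when LB_BIAS is compiled out.
         */
        return cpu_rq(cpu)->load.weight;
}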
Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Li Zhong <zhong@linux.vnet.ibm.com>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@alien8.de>
---
 kernel/sched/fair.c     | 13 +++++++++++--
 kernel/sched/features.h |  3 +++
 2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c61a614..81b62d6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -2922,6 +2922,15 @@ static unsigned long weighted_cpuload(const int cpu)
 	return cpu_rq(cpu)->load.weight;
 }
 
+static inline int sched_lb_bias(void)
+{
+#ifndef CONFIG_NO_HZ_FULL
+	return sched_feat(LB_BIAS);
+#else
+	return 0;
+#endif
+}
+
 /*
  * Return a low guess at the load of a migration-source cpu weighted
  * according to the scheduling class and "nice" value.
@@ -2934,7 +2943,7 @@ static unsigned long source_load(int cpu, int type)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
 
-	if (type == 0 || !sched_feat(LB_BIAS))
+	if (type == 0 || !sched_lb_bias())
 		return total;
 
 	return min(rq->cpu_load[type-1], total);
@@ -2949,7 +2958,7 @@ static unsigned long target_load(int cpu, int type)
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long total = weighted_cpuload(cpu);
 
-	if (type == 0 || !sched_feat(LB_BIAS))
+	if (type == 0 || !sched_lb_bias())
 		return total;
 
 	return max(rq->cpu_load[type-1], total);
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index 99399f8..635f902 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -43,7 +43,10 @@ SCHED_FEAT(ARCH_POWER, true)
 
 SCHED_FEAT(HRTICK, false)
 SCHED_FEAT(DOUBLE_TICK, false)
+
+#ifndef CONFIG_NO_HZ_FULL
 SCHED_FEAT(LB_BIAS, true)
+#endif
 
 /*
  * Decrement CPU power based on time not spent running tasks
-- 
1.7.5.4