Date: Fri, 30 Jul 2010 10:51:46 -0700
From: Greg KH <>
Subject: [089/205] sched: Fix over-scheduling bug
2.6.34-stable review patch. If anyone has any objections, please let us know.
------------------
From: Alex Shi <alex.shi@intel.com>
commit 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 upstream.
Commit e70971591 ("sched: Optimize unused cgroup configuration") introduced an imbalanced scheduling bug.
If CGROUP is not used, update_h_load() does not update h_load. When the system has far more running tasks than logical CPUs, the stale cfs_rq[cpu]->h_load value causes load_balance() to pull too many tasks to the local CPU from the busiest CPU, so the role of busiest CPU keeps rotating round-robin among the CPUs. That hurts performance.
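To make the mechanism concrete, here is a toy user-space model (not the kernel code; all names and numbers below are made up) of the scaling idea behind the balancer's per-task load estimate, weight * h_load / rq_load: when h_load is stale and far too small, every task looks nearly weightless, so load_balance() concludes it must pull many more tasks to move the same amount of load.

/*
 * Toy model (not kernel code): how a stale cfs_rq->h_load skews
 * load_balance()'s estimate of how many tasks to pull.
 */
#include <stdio.h>

/* load the balancer charges to one task: weight * h_load / rq load */
static long task_h_load(long weight, long rq_load, long h_load)
{
	return weight * h_load / rq_load;
}

int main(void)
{
	const long nice0     = 1024;          /* weight of a nice-0 task       */
	const long ntasks    = 12;            /* runnable tasks on busiest CPU */
	const long rq_load   = ntasks * nice0;
	const long imbalance = 5 * nice0;     /* load we want to migrate       */

	/* h_load kept up to date: each task is charged its full weight */
	long good = task_h_load(nice0, rq_load, rq_load);
	printf("fresh h_load: pull %ld tasks\n", imbalance / good);

	/* h_load stale (never updated): tasks look almost weightless,
	 * so far more of them are pulled for the same imbalance */
	long bad = task_h_load(nice0, rq_load, nice0);
	printf("stale h_load: pull %ld tasks\n", imbalance / bad);
	return 0;
}

Compiled with gcc, the fresh case reports a pull of 5 tasks while the stale case reports several dozen, which mirrors the over-pulling and round-robin behavior described above.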
The issue was originally found with a scientific computation workload developed by Yanmin. With that commit, the workload's performance dropped by about 40%.
CPU     before   after
00   :    2   :    7
01   :    1   :    7
02   :   11   :    6
03   :   12   :    7
04   :    6   :    6
05   :   11   :    7
06   :   10   :    6
07   :   12   :    7
08   :   11   :    6
09   :   12   :    6
10   :    1   :    6
11   :    1   :    6
12   :    6   :    6
13   :    2   :    6
14   :    2   :    6
15   :    1   :    6
Reviewed-by: Yanmin Zhang <yanmin.zhang@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1276754893.9452.5442.camel@debian>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
---
 kernel/sched.c |    3 ---
 1 file changed, 3 deletions(-)
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1675,9 +1675,6 @@ static void update_shares(struct sched_d
 
 static void update_h_load(long cpu)
 {
-	if (root_task_group_empty())
-		return;
-
 	walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
 }
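For context, the now-unconditional walk_tg_tree(tg_load_down, tg_nop, cpu) call walks the task-group hierarchy top-down so that every group's h_load reflects its share of the root runqueue load. A rough user-space sketch of that propagation step (hypothetical structure and names, not the kernel's actual tg_load_down()):

/*
 * Rough sketch (hypothetical names, not kernel code) of the top-down
 * pass that walk_tg_tree() performs: each group's h_load is the
 * parent's h_load scaled by the group's share of the parent's
 * runqueue load.  Skipping this pass is what left h_load stale.
 */
#include <stdio.h>

struct group {
	struct group *parent;
	long weight;       /* this group's weight within the parent rq */
	long parent_load;  /* total weight queued on the parent rq     */
	long h_load;       /* hierarchical load, computed top-down     */
};

static void load_down(struct group *g, long root_rq_load)
{
	if (!g->parent)
		g->h_load = root_rq_load;           /* root: raw rq load  */
	else
		g->h_load = g->parent->h_load * g->weight
			    / (g->parent_load + 1); /* +1 avoids div-by-0 */
}

int main(void)
{
	struct group root  = { NULL, 0, 0, 0 };
	struct group child = { &root, 1024, 4096, 0 };

	load_down(&root, 8192);   /* pretend the CPU rq carries 8192 */
	load_down(&child, 0);
	printf("root h_load=%ld child h_load=%ld\n",
	       root.h_load, child.h_load);
	return 0;
}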