Date: 29 Nov 2010
From: Paul Turner <pjt@google.com>
Subject: [patch] sched: fix unregister_fair_sched_group
In the flipping and flopping between calling unregister_fair_sched_group()
on a per-cpu versus per-group basis we ended up in a bad state: the leaf
list removal was indexed by a stale loop variable 'i' rather than the cpu
being torn down.

Remove the group's cfs_rq from the leaf list for the passed cpu, as opposed
to some arbitrary index.

(This fixes explosions with autogroup, as well as crashes under a group
creation/destruction stress test.)

Signed-off-by: Paul Turner <pjt@google.com>
---
kernel/sched.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)

Index: tip/kernel/sched.c
===================================================================
--- tip.orig/kernel/sched.c
+++ tip/kernel/sched.c
@@ -8097,7 +8097,6 @@ static inline void unregister_fair_sched
 {
 	struct rq *rq = cpu_rq(cpu);
 	unsigned long flags;
-	int i;
 
 	/*
 	 * Only empty task groups can be destroyed; so we can speculatively
@@ -8107,7 +8106,7 @@ static inline void unregister_fair_sched
 		return;
 
 	raw_spin_lock_irqsave(&rq->lock, flags);
-	list_del_leaf_cfs_rq(tg->cfs_rq[i]);
+	list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
 	raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 #else /* !CONFIG_FAIR_GROUP_SCHED */
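
For readers following along, this is roughly how the whole function reads
once the fix is applied. The on_list test is reconstructed from the "check
on_list" comment in the first hunk (the exact condition is elided from the
quoted context), so treat this as a sketch rather than verbatim source:

static inline void unregister_fair_sched_group(struct task_group *tg, int cpu)
{
	struct rq *rq = cpu_rq(cpu);
	unsigned long flags;

	/*
	 * Only empty task groups can be destroyed; so we can speculatively
	 * check on_list without danger of it being re-added.
	 */
	if (!tg->cfs_rq[cpu]->on_list)	/* reconstructed, not quoted above */
		return;

	raw_spin_lock_irqsave(&rq->lock, flags);
	/* Remove this cpu's cfs_rq for the group, not tg->cfs_rq[i]. */
	list_del_leaf_cfs_rq(tg->cfs_rq[cpu]);
	raw_spin_unlock_irqrestore(&rq->lock, flags);
}

Indexing by the function's cpu argument keeps teardown symmetric with the
per-cpu registration path; the old tg->cfs_rq[i] read whatever the (now
deleted) loop index happened to hold.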


