Subject: Re: 5.6-rc3: WARNING: CPU: 48 PID: 17435 at kernel/sched/fair.c:380 enqueue_task_fair+0x328/0x440
On 02/28/20 16:42, Christian Borntraeger wrote:
>
>
> On 28.02.20 16:37, Vincent Guittot wrote:
> > On Fri, 28 Feb 2020 at 16:08, Christian Borntraeger
> > <borntraeger@de.ibm.com> wrote:
> >>
> >> Also happened with 5.4:
> >> Seems that I just happen to have an interesting test workload/system size interaction
> >> on a newly installed system that triggers this.
> >
> > You can probably go back to 5.1, which is the version where we put
> > back the deletion of unused cfs_rq from the list, which can trigger
> > the warning:
> > commit 039ae8bcf7a5 ("sched/fair: Fix O(nr_cgroups) in the load balancing path")
> >
> > AFAICT, we haven't changed this since then.
>
> So do you know what the problem is? If not, is there any debug option or
> patch that I could apply to give you more information?
>

It might be a long shot, as I'm not particularly knowledgeable about this code
path, but could we be missing rcu_read_lock()/rcu_read_unlock() around the
call to unthrottle_cfs_rq() here?

---

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fc1dfc007604..56aa5cfbb7f1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7434,6 +7434,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
 
         raw_spin_unlock_irq(&cfs_b->lock);
 
+        rcu_read_lock();
         for_each_online_cpu(i) {
                 struct cfs_rq *cfs_rq = tg->cfs_rq[i];
                 struct rq *rq = cfs_rq->rq;
@@ -7447,6 +7448,7 @@ static int tg_set_cfs_bandwidth(struct task_group *tg, u64 period, u64 quota)
                         unthrottle_cfs_rq(cfs_rq);
                 rq_unlock_irq(rq, &rf);
         }
+        rcu_read_unlock();
         if (runtime_was_enabled && !runtime_enabled)
                 cfs_bandwidth_usage_dec();
 out_unlock:
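
For readers unfamiliar with the pattern: rcu_read_lock()/rcu_read_unlock()
bracket a read-side critical section, inside which RCU-protected data can be
traversed safely because concurrent updaters must wait for all pre-existing
readers before freeing anything. Below is a minimal, self-contained userspace
sketch of that same lock/unlock shape. To be clear, it is an illustration
only, not from this thread: it uses liburcu's classic <urcu.h> API and a
made-up struct cfg, whereas the patch above of course uses the in-kernel RCU
primitives.

/*
 * Minimal RCU read-side/write-side sketch using liburcu.
 * Build: gcc rcu_sketch.c -o rcu_sketch -lurcu
 */
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>               /* liburcu classic API */

/* Hypothetical RCU-protected object; not a kernel structure. */
struct cfg {
        int quota;
};

static struct cfg *global_cfg;  /* RCU-protected pointer */

static void reader(void)
{
        struct cfg *c;

        rcu_read_lock();        /* begin read-side critical section */
        c = rcu_dereference(global_cfg);
        if (c)
                printf("quota = %d\n", c->quota);
        rcu_read_unlock();      /* end read-side critical section */
}

static void writer(int new_quota)
{
        struct cfg *newc = malloc(sizeof(*newc));
        struct cfg *oldc = global_cfg;

        if (!newc)
                return;
        newc->quota = new_quota;
        rcu_assign_pointer(global_cfg, newc);   /* publish new version */
        synchronize_rcu();      /* wait for pre-existing readers to finish */
        free(oldc);             /* safe: no reader can still see the old copy */
}

int main(void)
{
        rcu_register_thread();  /* each thread using RCU must register */
        writer(100);
        reader();
        rcu_unregister_thread();
        return 0;
}

The bracketing in the patch above carries the same guarantee: anything
reached through RCU-protected pointers between the rcu_read_lock() and
rcu_read_unlock() calls cannot be freed out from under the reader.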