Subject: Re: scheduler oddity [bug?]
From: Mike Galbraith <efault@gmx.de>
Date: Sun, 08 Mar 2009
On Sun, 2009-03-08 at 16:39 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@gmx.de> wrote:
>
> > The problem with your particular testcase is that while one
> > half has an avg_overlap (what we use as affinity hint for
> > synchronous wakeups) which triggers the affinity hint, the
> > other half has avg_overlap of zero, what it was born with, so
> > despite significant execution overlap, the scheduler treats
> > them as if they were truly synchronous tasks.
>
> hm, why does it stay on zero?

Wakeup preemption. My presumption: the heavy task wakes the light task
and is preempted; the light task stuffs data into the pipe; the heavy
task therefore never blocks, so avg_overlap is never computed and stays
at the zero it was born with. The heavy task uses 100% CPU.
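
Roughly this shape, if the presumption holds (a made-up minimal sketch,
not the testcase from this thread): one half burns CPU between pipe
round trips and never sleeps, the other half lives blocked in read().

#include <stdio.h>
#include <unistd.h>

static void burn(void)
{
	/* a few ms of pure CPU so the heavy half never sleeps */
	volatile unsigned long i, sink = 0;

	for (i = 0; i < 5000000UL; i++)
		sink += i;
}

int main(void)
{
	int ping[2], pong[2];
	char c = 'x';

	if (pipe(ping) || pipe(pong)) {
		perror("pipe");
		return 1;
	}

	if (fork() == 0) {
		/* light half: blocks in read(), bounces the token back */
		for (;;) {
			if (read(ping[0], &c, 1) != 1)
				break;
			if (write(pong[1], &c, 1) != 1)
				break;
		}
		return 0;
	}

	/* heavy half: wakes the light half, then burns CPU; by the
	 * time it asks for the reply the data is already there, so
	 * it (almost) never blocks */
	for (;;) {
		if (write(ping[1], &c, 1) != 1)
			break;
		burn();
		if (read(pong[0], &c, 1) != 1)
			break;
	}
	return 0;
}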

Running the pair as SCHED_BATCH (virgin source), things become sane:

pipetest (6836, #threads: 1)
---------------------------------------------------------
se.exec_start : 266073.001296
se.vruntime : 173620.953443
se.sum_exec_runtime : 11324.486321
se.avg_overlap : 1.306762
nr_switches : 381
nr_voluntary_switches : 2
nr_involuntary_switches : 379
se.load.weight : 1024
policy : 3
prio : 120
clock-delta : 109

pipetest (6837, #threads: 1)
---------------------------------------------------------
se.exec_start : 266066.098182
se.vruntime : 51893.050177
se.sum_exec_runtime : 2367.077751
se.avg_overlap : 0.077492
nr_switches : 897
nr_voluntary_switches : 828
nr_involuntary_switches : 69
se.load.weight : 1024
policy : 3
prio : 120
clock-delta : 109
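
("policy: 3" above is SCHED_BATCH. A minimal sketch of one way to get
there from the test itself, same effect as launching it under
"chrt -b 0 <cmd>":)

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	/* SCHED_BATCH only accepts static priority 0 */
	struct sched_param sp = { .sched_priority = 0 };

	if (sched_setscheduler(0 /* self */, SCHED_BATCH, &sp)) {
		perror("sched_setscheduler");
		return 1;
	}
	/* ... run the actual workload here ... */
	return 0;
}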

> > static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
> > {
> > + u64 limit = sysctl_sched_migration_cost;
> > + u64 runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
> > +
> > if (sleep && p->se.last_wakeup) {
> > update_avg(&p->se.avg_overlap,
> > p->se.sum_exec_runtime - p->se.last_wakeup);
> > p->se.last_wakeup = 0;
> > - }
> > + } else if (p->se.avg_overlap < limit && runtime >= limit)
> > + update_avg(&p->se.avg_overlap, runtime);
> >
> > sched_info_dequeued(p);
> > p->sched_class->dequeue_task(rq, p, sleep);
>
> hm, that's weird. We want to limit avg_overlap maintenance to
> true sleeps only.

Except that when we stop sleeping, we're left with a stale avg_overlap.
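
For context on where the stale zero bites: the sync-wakeup affinity
hint boils down to comparing both tasks' avg_overlap against
sysctl_sched_migration_cost, roughly as below (paraphrased, names
approximate, not a verbatim quote of sched_fair.c). A task whose
avg_overlap never got off its initial 0 always looks synchronous:

/*
 * Paraphrased sketch of the affinity hint, NOT verbatim kernel code:
 * a sync wakeup is only honoured if both waker and wakee have a small
 * avg_overlap.  An avg_overlap stuck at 0 always passes this test.
 */
static int overlap_says_synchronous(struct task_struct *curr,
				    struct task_struct *p, int sync)
{
	u64 limit = sysctl_sched_migration_cost;

	return sync && curr->se.avg_overlap < limit
		    && p->se.avg_overlap < limit;
}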

> And this patch only makes a difference in the !sleep case -
> which shouldnt be that common in this workload.

The hack was only there to kill the stale zero. Let's forget the hack ;-)

-Mike


