Subject: Re: [PATCH 3/4] sched/fair: Do not migrate on wake_affine_weight if weights are equal
On Mon, Feb 12, 2018 at 02:58:56PM +0000, Mel Gorman wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c1091cb023c4..28c8d9c91955 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5747,7 +5747,16 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
> prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
> prev_eff_load *= capacity_of(this_cpu);
>
> - return this_eff_load <= prev_eff_load ? this_cpu : nr_cpumask_bits;
> + /*
> + * If sync, adjust the weight of prev_eff_load such that if
> + * prev_eff == this_eff that select_idle_sibling will consider
> + * stacking the wakee on top of the waker if no other CPU is
> + * idle.
> + */
> + if (sync)
> + prev_eff_load += 1;

So where we had <= and would consistently favour pulling the task to the
waking CPU when all else was equal, you now switch to <, such that when
things are equal we do not pull.

That makes sense I suppose.

Except for sync wakeups, where you say: if everything else is equal,
pull. That also makes sense, because sync says 'current' promises to
go away.
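
As a quick illustration (a standalone sketch, not the kernel code; the
helper name and load values are made up), the tie-break for exactly
equal effective loads comes down to this:

	#include <stdio.h>
	#include <stdbool.h>

	/* Hypothetical helper, not a kernel function: returns true if
	 * the wakee should be pulled to the waking CPU. */
	static bool pull_to_waker(unsigned long this_eff_load,
				  unsigned long prev_eff_load, bool sync)
	{
		/* sync: the waker promises to go away, so win ties */
		if (sync)
			prev_eff_load += 1;

		return this_eff_load < prev_eff_load;
	}

	int main(void)
	{
		/* Equal effective loads: stay put unless sync */
		printf("equal, !sync: pull=%d\n", pull_to_waker(100, 100, false)); /* 0 */
		printf("equal,  sync: pull=%d\n", pull_to_waker(100, 100, true));  /* 1 */
		return 0;
	}

With !sync an exact tie now keeps the task where it was; with sync the
tie goes to the waking CPU, which is exactly the +1 bias in the hunk
above.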

OK.

> +
> + return this_eff_load < prev_eff_load ? this_cpu : nr_cpumask_bits;
> }
>
> static int wake_affine(struct sched_domain *sd, struct task_struct *p,
> --
> 2.15.1
>
