Subject: Re: [PATCH 1/8] sched/fair: Clean up active balance nr_balance_failed trickery
Date: Fri, 5 Feb 2021
On Thu, 28 Jan 2021 at 19:32, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
> When triggering an active load balance, sd->nr_balance_failed is set to
> such a value that any further can_migrate_task() using said sd will ignore
> the output of task_hot().
>
> This behaviour makes sense, as active load balance intentionally preempts a
> rq's running task to migrate it right away, but this asynchronous write is
> a bit shoddy, as the stopper thread might run active_load_balance_cpu_stop
> before the sd->nr_balance_failed write either becomes visible to the
> stopper's CPU or even happens on the CPU that appended the stopper work.
>
> Add a struct lb_env flag to denote active balancing, and use it in
> can_migrate_task(). Remove the sd->nr_balance_failed write that served the
> same purpose.
>
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
> kernel/sched/fair.c | 17 ++++++++++-------
> 1 file changed, 10 insertions(+), 7 deletions(-)
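
[ For reference: the racy interplay described in the changelog looks roughly
like this -- paraphrased from kernel/sched/fair.c around v5.11, not the
literal source: ]

        /* load_balance(), on the CPU that queues the stopper work: */
        stop_one_cpu_nowait(cpu_of(busiest),
                            active_load_balance_cpu_stop, busiest,
                            &busiest->active_balance_work);
        /* ...followed by a plain, unsynchronized write: */
        sd->nr_balance_failed = sd->cache_nice_tries + 1;

        /*
         * can_migrate_task(), possibly already executing in the stopper on
         * another CPU, may or may not observe that write:
         */
        if (tsk_cache_hot <= 0 ||
            env->sd->nr_balance_failed > env->sd->cache_nice_tries)
                return 1;
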
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 197a51473e0c..0f6a4e58ce3c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7423,6 +7423,7 @@ enum migration_type {
> #define LBF_SOME_PINNED 0x08
> #define LBF_NOHZ_STATS 0x10
> #define LBF_NOHZ_AGAIN 0x20
> +#define LBF_ACTIVE_LB 0x40
>
> struct lb_env {
> struct sched_domain *sd;
> @@ -7608,10 +7609,14 @@ int can_migrate_task(struct task_struct *p, struct lb_env *env)
>
> /*
> * Aggressive migration if:
> - * 1) destination numa is preferred
> - * 2) task is cache cold, or
> - * 3) too many balance attempts have failed.
> + * 1) active balance
> + * 2) destination numa is preferred
> + * 3) task is cache cold, or
> + * 4) too many balance attempts have failed.
> */
> + if (env->flags & LBF_ACTIVE_LB)
> + return 1;
> +

This changes the behavior on NUMA systems, because it skips
migrate_degrades_locality(), which can return 1 and prevent the active
migration regardless of nr_balance_failed.

Is that intentional?
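
[ For context, the pre-patch decision referred to above is roughly the
following; the return-value contract is paraphrased from the comment block
above migrate_degrades_locality(): ]

        /*
         * migrate_degrades_locality() returns:
         *   1  -> migration degrades NUMA locality
         *   0  -> migration improves locality
         *  -1  -> locality doesn't apply; fall back to task_hot()
         */
        tsk_cache_hot = migrate_degrades_locality(p, env);
        if (tsk_cache_hot == -1)
                tsk_cache_hot = task_hot(p, env);

        if (tsk_cache_hot <= 0 ||
            env->sd->nr_balance_failed > env->sd->cache_nice_tries)
                return 1;

        return 0;

[ With the new early return, this block is never reached on the active
balance path, so a "degrades locality" verdict can no longer be taken into
account there. ]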

> tsk_cache_hot = migrate_degrades_locality(p, env);
> if (tsk_cache_hot == -1)
> tsk_cache_hot = task_hot(p, env);
> @@ -9805,9 +9810,6 @@ static int load_balance(int this_cpu, struct rq *this_rq,
> active_load_balance_cpu_stop, busiest,
> &busiest->active_balance_work);
> }
> -
> - /* We've kicked active balancing, force task migration. */
> - sd->nr_balance_failed = sd->cache_nice_tries+1;
> }
> } else {
> sd->nr_balance_failed = 0;
> @@ -9963,7 +9965,8 @@ static int active_load_balance_cpu_stop(void *data)
> * @dst_grpmask we need to make that test go away with lying
> * about DST_PINNED.
> */
> - .flags = LBF_DST_PINNED,
> + .flags = LBF_DST_PINNED |
> + LBF_ACTIVE_LB,
> };
>
> schedstat_inc(sd->alb_count);
> --
> 2.27.0
>
