Subject: Re: [PATCH v2 2/5] sched/numa: Replace runnable_load_avg by load_avg
On 14/02/2020 16:27, Vincent Guittot wrote:

[...]

> /*
>  * The load is corrected for the CPU capacity available on each node.
>  *
> @@ -1788,10 +1831,10 @@ static int task_numa_migrate(struct task_struct *p)
>  	dist = env.dist = node_distance(env.src_nid, env.dst_nid);
>  	taskweight = task_weight(p, env.src_nid, dist);
>  	groupweight = group_weight(p, env.src_nid, dist);
> -	update_numa_stats(&env.src_stats, env.src_nid);
> +	update_numa_stats(&env, &env.src_stats, env.src_nid);

Passing both &env and a pointer into env (&env.src_stats) looks strange. Can you do:

-static void update_numa_stats(struct task_numa_env *env,
+static void update_numa_stats(unsigned int imbalance_pct,
 			      struct numa_stats *ns, int nid)

-	update_numa_stats(&env, &env.src_stats, env.src_nid);
+	update_numa_stats(env.imbalance_pct, &env.src_stats, env.src_nid);
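
For illustration only, a tiny stand-alone sketch of that narrowing, with stand-in types and dummy numbers (not the kernel ones), just to show the helper taking the one scalar it needs instead of the whole env:

/*
 * Stand-alone sketch (stand-in types, not the kernel ones): the stats
 * helper takes only the imbalance percentage, so callers pass
 * env.imbalance_pct rather than &env.
 */
#include <stdio.h>

struct numa_stats {
	unsigned long load;
	unsigned long compute_capacity;
};

struct task_numa_env {
	unsigned int imbalance_pct;
	int src_nid;
	struct numa_stats src_stats;
};

static void update_numa_stats(unsigned int imbalance_pct,
			      struct numa_stats *ns, int nid)
{
	/* Dummy accumulation standing in for the per-CPU loop. */
	ns->load = 1024UL * (nid + 1);
	ns->compute_capacity = 1024UL * imbalance_pct / 100;
}

int main(void)
{
	struct task_numa_env env = { .imbalance_pct = 112, .src_nid = 0 };

	/* Caller hands over env.imbalance_pct, not &env. */
	update_numa_stats(env.imbalance_pct, &env.src_stats, env.src_nid);

	printf("load=%lu capacity=%lu\n",
	       env.src_stats.load, env.src_stats.compute_capacity);
	return 0;
}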

[...]

> +static unsigned long cpu_runnable_load(struct rq *rq)
> +{
> +	return cfs_rq_runnable_load_avg(&rq->cfs);
> +}
> +

Why not remove cpu_runnable_load() in this patch rather than moving it? It ends up with no callers:

kernel/sched/fair.c:5492:22: warning: ‘cpu_runnable_load’ defined but
not used [-Wunused-function]
static unsigned long cpu_runnable_load(struct rq *rq)
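
i.e. (a sketch against this patch, assuming nothing else in fair.c still calls it) just drop the helper here:

-static unsigned long cpu_runnable_load(struct rq *rq)
-{
-	return cfs_rq_runnable_load_avg(&rq->cfs);
-}
-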

