Subject: Re: [RFC][PATCH 1/3] sched: Rewrite tg_shares_up
On Sun, Aug 29, 2010 at 12:30:26AM +0200, Peter Zijlstra wrote:
> By tracking a per-cpu load-avg for each cfs_rq and folding it into a
> global task_group load on each tick we can rework tg_shares_up to be
> strictly per-cpu.

So tg->load_weight is supposed to represent, more or less, the current task
load across all cpus? I see only atomic_add()s to it - which means it can only
keep growing or remain constant - IOW it captures the historical load ever
since the task group was started? I was expecting it to shrink as a group
goes idle, along the lines of the sketch below.
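Something like this is what I had in mind (a hand-wavy sketch, not code from
the patch - the helper name and the fold point are made up):

	/* hypothetical tick-time fold: a signed delta lets the
	 * global sum shrink again when this cpu's load drops */
	static void fold_cfs_load(struct cfs_rq *cfs_rq)
	{
		long load = cfs_rq->load.weight;
		long delta = load - cfs_rq->load_contribution;

		/* a negative delta subtracts, so tg->load_weight tracks
		 * current load instead of only ever growing */
		atomic_add(delta, &cfs_rq->tg->load_weight);
		cfs_rq->load_contribution = load;
	}

Otherwise the shares computation seems incorrect: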

> +static void update_cfs_shares(struct cfs_rq *cfs_rq)
> +{
> +	struct task_group *tg;
> +	struct sched_entity *se;
> +	unsigned long load_weight, load, shares;
> +
> +	if (!cfs_rq)
> +		return;
> +
> +	tg = cfs_rq->tg;
> +	se = tg->se[cpu_of(rq_of(cfs_rq))];
> +	if (!se)
> +		return;
> +
> +	load = cfs_rq->load.weight;
> +
> +	load_weight = atomic_read(&tg->load_weight);
> +	load_weight -= cfs_rq->load_contribution;
> +	load_weight += load;
> +
> +	shares = (tg->shares * load);
> +	if (load_weight)
> +		shares /= load_weight;

Even though we have corrected tg->load_weight to reflect the current load on
this 'cfs_rq', it still carries the historical load of the other cpus and
hence could be a large number, making the division inaccurate?
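To put made-up numbers on it: say tg->shares = 1024 and this cfs_rq has
load = 512, but stale contributions from long-idle cpus leave load_weight at
65536 even after the correction above. Then shares = 1024 * 512 / 65536 = 8,
where this cpu ought to get close to the full 1024 if it is the only one
actually busy.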

Also, I wonder how much of a hot spot tg->load_weight would become ...
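(Back-of-the-envelope, assuming HZ=1000: on a 64-cpu box, folding on every
tick means on the order of 64,000 atomic read-modify-writes per second, all
hitting the one cacheline holding tg->load_weight, per task group.)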

- vatsa

