Date: Mon, 06 May 2013 15:49:48 +0800
From: Michael Wang <>
Subject: Re: [PATCH v5 7/7] sched: consider runnable load average in effective_load
On 05/06/2013 01:39 PM, Alex Shi wrote:
[snip]
Rough test done:
> 1, change back the tg_weight in calc_tg_weight() to use tg_load_contrib,
> not the direct load.

This stops the regression introduced by patch 7.
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 6f4f14b..c770f8d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1037,8 +1037,8 @@ static inline long calc_tg_weight(struct task_group *tg, struct cfs_rq *cfs_rq)
>  	 * update_cfs_rq_load_contribution().
>  	 */
>  	tg_weight = atomic64_read(&tg->load_avg);
> -	tg_weight -= cfs_rq->tg_load_contrib;
> -	tg_weight += cfs_rq->load.weight;
> +	//tg_weight -= cfs_rq->tg_load_contrib;
> +	//tg_weight += cfs_rq->load.weight;
> 
>  	return tg_weight;
>  }
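For reference, with those two lines commented out, calc_tg_weight()
effectively reduces to the following (a sketch against the fair.c quoted
above, not part of the posted diff):

	static inline long calc_tg_weight(struct task_group *tg,
					  struct cfs_rq *cfs_rq)
	{
		/*
		 * Take the group weight purely from the averaged
		 * tg->load_avg (the accumulated sum of each cfs_rq's
		 * tg_load_contrib), instead of correcting it with this
		 * cpu's instantaneous cfs_rq->load.weight; cfs_rq is
		 * then unused here.
		 */
		return atomic64_read(&tg->load_avg);
	}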
> 2, another try is to follow the current calc_tg_weight(), so remove the
> following change [the hunk quoted at the bottom of this mail].

This shows even better results than patches 1~6 alone.
But the way Preeti suggested doesn't work...

Maybe we should record some explanation for this change here, shouldn't we?
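Something like the following could serve as that explanation: a
single-level sketch of effective_load() with the change applied
(simplified; the MIN_SHARES clipping and the walk up the hierarchy are
omitted), showing that numerator and denominator then come from the
same averaged quantity:

	W  = wg + calc_tg_weight(tg, se->my_q);	/* sum of averaged loads   */
	w  = se->my_q->tg_load_contrib + wl;	/* averaged rw_i + @wl     */
	wl = (w * tg->shares) / W;		/* new share at this level */
	wl -= se->load.weight;			/* delta vs. current share */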
Regards,
Michael Wang
>>>> @@ -3045,7 +3045,7 @@ static long effective_load(struct task_group *tg, int cpu, long wl, long wg)
>>>>  		/*
>>>>  		 * w = rw_i + @wl
>>>>  		 */
>>>> -		w = se->my_q->load.weight + wl;
>>>> +		w = se->my_q->tg_load_contrib + wl;
> 
> Would you like to try them?