Subject: Re: [PATCH 2/2 v4] sched: Rewrite per entity runnable load average tracking
On Fri, Jul 18, 2014 at 07:26:06AM +0800, Yuyang Du wrote:

> -static inline void __update_tg_runnable_avg(struct sched_avg *sa,
> -						struct cfs_rq *cfs_rq)
> -{
> -	struct task_group *tg = cfs_rq->tg;
> -	long contrib;
> -
> -	/* The fraction of a cpu used by this cfs_rq */
> -	contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
> -			  sa->runnable_avg_period + 1);
> -	contrib -= cfs_rq->tg_runnable_contrib;
> -
> -	if (abs(contrib) > cfs_rq->tg_runnable_contrib / 64) {
> -		atomic_add(contrib, &tg->runnable_avg);
> -		cfs_rq->tg_runnable_contrib += contrib;
> -	}
> -}


> -static inline void __update_group_entity_contrib(struct sched_entity *se)
> +static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
>  {
> +	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
>
> +	if (delta) {
> +		atomic_long_add(delta, &cfs_rq->tg->load_avg);
> +		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> +	}
>  }

We talked about this before; you've made that an unconditional atomic op
on an already hot cacheline.

You need some words on why this isn't a problem. Either in a comment or
in the Changelog. You cannot leave such changes undocumented.
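
FWIW, the obvious way to keep that cacheline cold is to bring back the
old 1/64 deadband from __update_tg_runnable_avg(); a sketch against the
names in your patch (not tested, just to show the shape):

static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
{
	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

	/*
	 * Only dirty the shared tg->load_avg cacheline when the local
	 * contribution moved by more than ~1.5% (1/64), like the removed
	 * __update_tg_runnable_avg() filter did.
	 */
	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_long_add(delta, &cfs_rq->tg->load_avg);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
	}
}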

> +#define subtract_until_zero(minuend, subtrahend)	\
> +	(subtrahend < minuend ? minuend - subtrahend : 0)

WTH is a minuend or subtrahend? Are you a wordsmith in your spare time
who likes to make up your own words?

Also, isn't writing x = max(0, x - y) far more readable to begin with?
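
That is, something like this (a sketch; it assumes the avg fields can
go through a signed type here, and uses the existing max_t()):

	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);

		/* clamp at zero, without naming the operands */
		cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - r, 0);
		cfs_rq->avg.load_sum = max_t(s64,
				cfs_rq->avg.load_sum - (s64)r * LOAD_AVG_MAX, 0);
	}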

> +/*
> + * Group cfs_rq's load_avg is used for task_h_load and update_cfs_share
> + * calc.
> + */
> +static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>  {
> +	int decayed;
>
> +	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> +		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> +		cfs_rq->avg.load_avg = subtract_until_zero(cfs_rq->avg.load_avg, r);
> +		r *= LOAD_AVG_MAX;
> +		cfs_rq->avg.load_sum = subtract_until_zero(cfs_rq->avg.load_sum, r);
> +	}
>
> +	decayed = __update_load_avg(now, &cfs_rq->avg, cfs_rq->load.weight);
>
> +#ifndef CONFIG_64BIT
> +	if (cfs_rq->avg.last_update_time != cfs_rq->load_last_update_time_copy) {
> +		smp_wmb();
> +		cfs_rq->load_last_update_time_copy = cfs_rq->avg.last_update_time;
> +	}
> +#endif
>
> +	return decayed;
> +}

It's a bit unfortunate that we update the copy in a different function
than the original, but I think I see why you did that. But is it at all
likely that we do not need to update? That is, does that compare make
any sense?
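
Also note that the write-side smp_wmb() only works if the read side
does the mirror image; I'm assuming you have something like the below
on 32-bit (helper name made up, just illustrating the pairing):

#ifndef CONFIG_64BIT
/*
 * last_update_time is 64 bits and cannot be loaded atomically on
 * 32-bit, so readers spin until the copy -- which the writer stores
 * after the main field, separated by smp_wmb() -- agrees with the
 * field itself.
 */
static u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
{
	u64 last_update_time_copy;
	u64 last_update_time;

	do {
		last_update_time_copy = cfs_rq->load_last_update_time_copy;
		smp_rmb();
		last_update_time = cfs_rq->avg.last_update_time;
	} while (last_update_time != last_update_time_copy);

	return last_update_time;
}
#endif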

