Subject: Re: [RESEND PATCH 2/3 v5] sched: Rewrite per entity runnable load average tracking

On Fri, Oct 10, 2014 at 10:21:56AM +0800, Yuyang Du wrote:
> /*
> + * Updating tg's load_avg is necessary before update_cfs_share (which is done)
> + * and effective_load (which is not done because it is too costly).
> */
> +static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
> {
> +	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
>
> +	if (abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
> +		atomic_long_add(delta, &cfs_rq->tg->load_avg);
> +		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> 	}
> }

In the thread here: lkml.kernel.org/r/1409094682.29189.23.camel@j-VirtualBox
there are concerns about the error bounds of such constructs. We can
basically 'leak' nr_cpus * threshold, which is potentially a very large
number.
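
To make that bound concrete, here is a toy calculation (standalone C with
illustrative numbers, not figures from the thread): each per-cpu cfs_rq can
sit on an unpublished delta of up to tg_load_avg_contrib / 64 before the
threshold trips, so across all CPUs the stale error in tg->load_avg can
approach nr_cpus times that.

#include <stdio.h>

int main(void)
{
	long contrib = 1024;	/* illustrative per-cfs_rq contribution */
	long nr_cpus = 64;	/* illustrative machine size */

	/* Each CPU may hold back up to 1/64th of its contribution... */
	long per_cpu_slack = contrib / 64;

	/* ...so the group-wide sum can be stale by nr_cpus times that. */
	long worst_case = nr_cpus * per_cpu_slack;

	printf("per-cpu slack: %ld, worst-case drift: %ld\n",
	       per_cpu_slack, worst_case);
	return 0;
}

With 64 CPUs the drift can equal a full contribution's worth of load,
which is the 'very large number' above.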

Do we want to introduce a forced update to combat this?
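
For reference, a minimal sketch of what such a forced update could look
like, mirroring the function quoted above with an extra 'force' flag (the
flag and its callers are assumptions here, not something in the patch):

static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
{
	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;

	/*
	 * When forced (e.g. on migration or group teardown, both assumed
	 * call sites), publish unconditionally; otherwise only once the
	 * accumulated error exceeds ~1.5% of the last contribution.
	 */
	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
		atomic_long_add(delta, &cfs_rq->tg->load_avg);
		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
	}
}

Callers that must not leave stale load behind would pass force = 1; the
common paths would keep passing 0 and retain the cheap threshold behaviour.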


