From: Ben Segall <bsegall@google.com>
Subject: Re: [PATCH v2 3/3] sched/fair: Disable tg load_avg update for root_task_group
Date: Wed, 02 Dec 2015 11:55:44 -0800
Waiman Long <Waiman.Long@hpe.com> writes:
> Currently, the update_tg_load_avg() function attempts to update the
> tg's load_avg value whenever the load changes even for root_task_group
> where the load_avg value will never be used. This patch will disable
> the load_avg update when the given task group is the root_task_group.
>
> Running a Java benchmark with noautogroup and a 4.3 kernel on a
> 16-socket IvyBridge-EX system, the amount of CPU time (as reported by
> perf) consumed by task_tick_fair() which includes update_tg_load_avg()
> decreased from 0.71% to 0.22%, a more than 3X reduction. The Max-jOPs
> results also increased slightly from 983015 to 986449.
>
> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>
Reviewed-by: Ben Segall <bsegall@google.com>

> ---
>  kernel/sched/fair.c | 6 ++++++
>  1 files changed, 6 insertions(+), 0 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8f1eccc..4607cb7 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2670,6 +2670,12 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
>  {
>  	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
>
> +	/*
> +	 * No need to update load_avg for root_task_group as it is not used.
> +	 */
> +	if (cfs_rq->tg == &root_task_group)
> +		return;
> +
>  	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
>  		atomic_long_add(delta, &cfs_rq->tg->load_avg);
>  		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
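For context, a minimal userspace sketch (not kernel code; the thread
count, the cpu_tick name, and the synthetic load values are invented
for illustration) of the pattern the patch sidesteps: many CPUs folding
local load deltas into one shared atomic counter whose aggregate, for
the root group, is never read.

/*
 * Hypothetical sketch: each thread stands in for a CPU periodically
 * folding its local load delta into a shared counter, the way
 * update_tg_load_avg() folds cfs_rq deltas into tg->load_avg.
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

#define NTHREADS 16             /* stand-in for one CPU per socket */
#define NITERS   (1L << 22)

static atomic_long tg_load_avg; /* shared: its cache line bounces */

static void *cpu_tick(void *arg)
{
        int skip = *(int *)arg; /* model the patched root-group path */
        long contrib = 0;       /* per-cpu tg_load_avg_contrib */

        for (long i = 0; i < NITERS; i++) {
                long avg = i & 1023;    /* stand-in for cfs_rq->avg.load_avg */
                long delta = avg - contrib;

                if (skip)
                        continue;       /* root group: aggregate unused */

                /* mirrors the 1/64 threshold in update_tg_load_avg() */
                if (labs(delta) > contrib / 64) {
                        atomic_fetch_add(&tg_load_avg, delta);
                        contrib = avg;
                }
        }
        return NULL;
}

int main(int argc, char **argv)
{
        pthread_t tid[NTHREADS];
        int skip = argc > 1;    /* any argument: skip shared updates */

        for (int i = 0; i < NTHREADS; i++)
                pthread_create(&tid[i], NULL, cpu_tick, &skip);
        for (int i = 0; i < NTHREADS; i++)
                pthread_join(tid[i], NULL);

        printf("tg_load_avg = %ld\n", atomic_load(&tg_load_avg));
        return 0;
}

Compiled with cc -O0 -pthread and timed with and without the skip
argument, the runtime gap is dominated by cross-CPU traffic on the
shared counter; the patch obtains the same saving by returning early
before touching tg->load_avg at all.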