    From: Vincent Guittot
    Date: 2017-04-26
    Subject: Re: [PATCH 1/2] sched/fair: Fix how load gets propagated from cfs_rq to its sched_entity
    On Tuesday 25 Apr 2017 at 11:12:19 (-0700), Tejun Heo wrote:
    > Hello,
    >
    > On Tue, Apr 25, 2017 at 10:35:53AM +0200, Vincent Guittot wrote:
    > > I'm not sure I follow your example: a task TA with a load_avg of 1 is
    > > the only task in a task group GB, so the cfs_rq's load_avg is 1 too,
    > > and the group_entity of this cfs_rq has a weight of 1024 (I use the
    > > 10-bit format for readability), which is the total share of task
    > > group GB
    >
    > The group_entity (the sched_entity corresponding to the cfs_rq) should
    > behave as if it's a task which has the weight of 1024.
    >
    > > Are you saying that the group_entity load_avg should be around 1024 and not 1?
    >
    > Yes.
    >
    > > I would say it depends on TA's weight. I assume that TA's weight is
    > > the default value (1024), as you don't specify any value in your example
    >
    > Please consider the following configuration, where GA is a group
    > entity, and TA and TB are tasks.
    >
    > ROOT - GA (weight 1024) - TA (weight 1)
    >      \ GB (weight 1   ) - TB (weight 1)
    >
    > Let's say both TA and TB are running full-tilt. Now let's take out GA
    > and GB.
    >
    > ROOT - TA1 (weight 1024)
    >      \ TB1 (weight 1   )
    >
    > GA should behave the same as TA1, and GB the same as TB1.  GA's load should match
    > TA1's, and GA's load when seen from ROOT's cfs_rq has nothing to do
    > with how much total absolute weight it has inside it.
    >
    > ROOT - GA2 (weight 1024) - TA2 (weight 1   )
    >      \ GB2 (weight 1   ) - TB2 (weight 1024)
    >
    > If TA2 and TB2 are constantly running, GA2's and GB2's loads in ROOT's
    > cfs_rq should match GA's and GB's, respectively.

    Yes I agree
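
    To make the equivalence concrete, a rough sanity check (assuming a
    single CPU and a fully converged PELT average, i.e. load_avg ~=
    weight * runnable fraction):

        TA1: 1024 * 100% -> load_avg 1024    so GA must converge to 1024
        TB1:    1 * 100% -> load_avg    1    so GB must converge to    1

    i.e. the load that GA contributes to ROOT's cfs_rq should follow GA's
    own weight and runnable time only, not the absolute weights of the
    tasks inside it.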

    >
    > > If TA ran directly at the parent level, its sched_entity would have
    > > a load_avg of 1, so why should the group entity's load_avg be 1024? It
    >
    > Because then the hierarchical weight configuration doesn't mean
    > anything.
    >
    > > will just temporarily show the cfs_rq as more loaded than it really
    > > is, and in the end the group entity's load_avg will go back to 1
    >
    > It's not temporary. The weight of a group is its shares, which is its
    > load fraction of the configured weight of the group. Assuming UP, if
    > you configure a group to the weight of 1024 and have any task running
    > full-tilt in it, the group will converge to the load of 1024. The
    > problem is that the propagation logic is currently doing something
    > completely different and temporarily pushes down the load whenever it
    > triggers.

    Ok, I see your point and I agree that there is an issue when propagating
    the load_avg of a task group whose tasks have a lower weight than the
    share, but your proposal has an issue too: it uses runnable_load_avg
    instead of load_avg, and this makes the propagation of load_avg
    incorrect. Something like the change below, which keeps using load_avg,
    solves the problem:

    +        if (gcfs_rq->load.weight) {
    +                long shares = scale_load_down(calc_cfs_shares(gcfs_rq, gcfs_rq->tg));
    +
    +                load = min(gcfs_rq->avg.load_avg *
    +                           shares / scale_load_down(gcfs_rq->load.weight), shares);

    I have run schbench with the change above on v4.11-rc8 and the latencies are ok
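
    For illustration, here is the same clamping as a stand-alone sketch (a
    hypothetical helper with plain C types, standing in for the kernel's
    cfs_rq fields and scale_load_down()):

        /*
         * Scale the group cfs_rq's load_avg up to the group's share and
         * cap it at the share, so that a low-weight task cannot drag the
         * group entity's load below what its shares represent.
         */
        static long propagated_group_load(long load_avg, long load_weight,
                                          long shares)
        {
                long load;

                if (!load_weight)
                        return 0;

                load = load_avg * shares / load_weight;
                return load < shares ? load : shares;
        }

    With your first example (TA of weight 1 running full-tilt in GA with
    shares of 1024): load_avg = 1, load.weight = 1, shares = 1024, so the
    helper returns min(1 * 1024 / 1, 1024) = 1024, as expected, instead of
    the load being pushed back down to 1.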

    Thanks
    Vincent
    >
    >
    > Thanks.
    >
    > --
    > tejun
