From: bsegall@google.com
Subject: Re: [RFC PATCH] sched: fix the nonsense shares when load of cfs_rq is too, small
Date: 2020-03-06

王贇 <yun.wang@linux.alibaba.com> writes:

> On 2020/3/5 2:47 AM, bsegall@google.com wrote:
> [snip]
>>> Argh, because A->cfs_rq.load.weight is B->se.load.weight which is
>>> B->shares/nr_cpus.
>>>
>>>> While the se of D on the root cfs_rq is far bigger than 2, so it
>>>> wins the battle.
>>>>
>>>> This patch adds a check for zero load and treats it as MIN_SHARES
>>>> to fix the nonsense shares; with the patch applied, group C wins as
>>>> expected.
>>>>
>>>> Signed-off-by: Michael Wang <yun.wang@linux.alibaba.com>
>>>> ---
>>>> kernel/sched/fair.c | 2 ++
>>>> 1 file changed, 2 insertions(+)
>>>>
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 84594f8aeaf8..53d705f75fa4 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -3182,6 +3182,8 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>>>          tg_shares = READ_ONCE(tg->shares);
>>>>
>>>>          load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>>>> +        if (!load && cfs_rq->load.weight)
>>>> +                load = MIN_SHARES;
>>>>
>>>>          tg_weight = atomic_long_read(&tg->load_avg);
>>>
>>> Yeah, I suppose that'll do. Hurmph, wants a comment though.
>>>
>>> But that has me looking at other users of scale_load_down(), and doesn't
>>> at least update_tg_cfs_load() suffer the same problem?
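For reference, the truncation in question: on 64-bit, scale_load_down()
is currently just a right shift by SCHED_FIXEDPOINT_SHIFT (10), so any
cfs_rq->load.weight below 1024 collapses to 0 before calc_group_shares()
ever looks at it. A minimal user-space sketch of that arithmetic -- the
96-CPU figure is made up and just stands in for "B->shares spread across
many CPUs":

    #include <stdio.h>

    /* user-space model of the 64-bit kernel scaling macros */
    #define SCHED_FIXEDPOINT_SHIFT  10
    #define MIN_SHARES              2UL
    #define scale_load(w)           ((w) << SCHED_FIXEDPOINT_SHIFT)
    #define scale_load_down(w)      ((w) >> SCHED_FIXEDPOINT_SHIFT)

    int main(void)
    {
            /* hypothetical per-CPU weight of B's group se */
            unsigned long w = scale_load(MIN_SHARES) / 96;  /* 21 */

            /* prints "scale_load_down(21) = 0" */
            printf("scale_load_down(%lu) = %lu\n", w, scale_load_down(w));
            return 0;
    }

With both that and avg.load_avg at 0, the shares computed for A's group
se get clamped to MIN_SHARES even though the cfs_rq is not empty, which
is the "nonsense shares" the patch is after.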
>>
>> I think instead we should probably scale_load_down(tg_shares) and
>> scale_load(load_avg). tg_shares is always a scaled integer, so just
>> moving the source of the scaling in the multiply should do the job.
>>
>> ie
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index fcc968669aea..6d7a9d72d742 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -3179,9 +3179,9 @@ static long calc_group_shares(struct cfs_rq *cfs_rq)
>>          long tg_weight, tg_shares, load, shares;
>>          struct task_group *tg = cfs_rq->tg;
>>
>> -        tg_shares = READ_ONCE(tg->shares);
>> +        tg_shares = scale_load_down(READ_ONCE(tg->shares));
>>
>> -        load = max(scale_load_down(cfs_rq->load.weight), cfs_rq->avg.load_avg);
>> +        load = max(cfs_rq->load.weight, scale_load(cfs_rq->avg.load_avg));
>>
>>          tg_weight = atomic_long_read(&tg->load_avg);
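Rough numbers to illustrate (made up, reusing the tiny weight of 21 from
the sketch above, the default cpu.shares of 1024 so tg->shares is
1024 << 10 = 1048576, and avg.load_avg already decayed to 0 as in the
report): today the tg_shares * load product that later gets divided by
tg_weight is 1048576 * scale_load_down(21) = 1048576 * 0 = 0, while with
the scaling moved it is scale_load_down(1048576) * 21 = 1024 * 21 =
21504. The small weight still contributes, so the ratio between
competing cfs_rqs survives instead of everything falling into the
MIN_SHARES clamp.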
>
> I get the point, but IMHO fixing scale_load_down() sounds better, to
> cover all the similar cases; let's first try that way and see if it
> works :-)

Yeah, that might not be a bad idea either; it's just that doing the fix
here would keep you from losing all your precision (and I'd have to
think about whether that would result in fairness issues, like all the
group ses having the full tg shares, or something like that).
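
For completeness, a sketch of what "fixing scale_load_down()" could look
like on the CONFIG_64BIT side -- the existing definitions with only a
clamp added so a nonzero weight never scales down below MIN_SHARES (2);
this is an illustration of the idea under discussion, not a tested
patch:

    /* kernel/sched/sched.h, CONFIG_64BIT -- sketch only */
    #ifdef CONFIG_64BIT
    # define NICE_0_LOAD_SHIFT  (SCHED_FIXEDPOINT_SHIFT + SCHED_FIXEDPOINT_SHIFT)
    # define scale_load(w)      ((w) << SCHED_FIXEDPOINT_SHIFT)
    # define scale_load_down(w)                                        \
    ({                                                                 \
            unsigned long __w = (w);                                   \
            if (__w)                                                   \
                    __w = max(2UL, __w >> SCHED_FIXEDPOINT_SHIFT);     \
            __w;                                                       \
    })
    #endif

That would keep every user of scale_load_down() from ever seeing 0 for a
non-empty cfs_rq, which is Michael's point about covering the similar
cases, at the cost of the precision concern above.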
