Subject: Re: [V2 2/2] sched: update cfs_rq weight earlier in enqueue_entity
From: Lei Wen <leiwen@marvell.com>
Paul,

On Mon, Jul 1, 2013 at 10:07 PM, Paul Turner <pjt@google.com> wrote:
> Could you please restate the below?
>
> On Mon, Jul 1, 2013 at 5:33 AM, Lei Wen <leiwen@marvell.com> wrote:
>> Since we are going to calculate cfs_rq's average ratio by
>> runnable_load_avg/load.weight
>
> I don't understand what you mean by this.

Previously I took the runnable_load_avg/load.weight calculation as the cfs_rq's
average ratio. But as Alex pointed out, runnable_avg_sum/runnable_avg_period
may serve this need better.
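
For reference, below is a minimal sketch (not from the patch; the helper names
are mine, for illustration only) of the two calculations discussed above,
assuming the current per-entity load tracking fields: cfs_rq->runnable_load_avg,
cfs_rq->load.weight, and sched_avg->runnable_avg_sum / runnable_avg_period.

/* Sketch only: the ratio I originally logged, in percent. */
static inline u64 cfs_rq_ratio_by_weight(struct cfs_rq *cfs_rq)
{
	if (!cfs_rq->load.weight)
		return 0;
	return div64_u64(cfs_rq->runnable_load_avg * 100,
			 cfs_rq->load.weight);
}

/* Sketch only: the ratio Alex suggested, in percent. */
static inline u32 avg_ratio_by_period(struct sched_avg *sa)
{
	if (!sa->runnable_avg_period)
		return 0;
	return sa->runnable_avg_sum * 100 / sa->runnable_avg_period;
}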

>
>>, if not increase the load.weight prior to
>> enqueue_entity_load_avg, it may lead to one cfs_rq's avg ratio higher
>> than 100%.
>>
>
> Or this.

In my mind, runnable_load_avg in one cfs_rq should always be less than
load.weight.
I am not sure whether this assumption holds here, but runnable_load_avg/load.weight
does show the cfs_rq's execution trend in some respects.

The previous problem is that enqueue_entity_load_avg is called before
account_entity_enqueue, which makes runnable_load_avg be updated first and
load.weight only afterwards. So with a trace log placed inside
enqueue_entity_load_avg, we may see runnable_load_avg/load.weight > 1,
which is not friendly for the final parsed data.
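
To make this concrete, here is a hypothetical trace hook (the function and the
trace_printk() message are mine, not part of the patch) showing what a log
emitted from inside enqueue_entity_load_avg() would compute with the original
call order:

/*
 * Hypothetical illustration only.  With the original order in
 * enqueue_entity():
 *
 *	enqueue_entity_load_avg(cfs_rq, se, ...);  <- runnable_load_avg now
 *	                                              includes se's contribution
 *	account_entity_enqueue(cfs_rq, se);        <- load.weight not yet
 *	                                              updated for se
 *
 * a log emitted at the end of enqueue_entity_load_avg() sees se's
 * contribution in runnable_load_avg but not yet in load.weight, so the
 * printed ratio can exceed 100%.
 */
static void trace_cfs_rq_ratio(struct cfs_rq *cfs_rq)
{
	unsigned long w = cfs_rq->load.weight;
	u64 ratio = w ? div64_u64(cfs_rq->runnable_load_avg * 100, w) : 0;

	trace_printk("runnable_load_avg=%llu load.weight=%lu ratio=%llu%%\n",
		     (unsigned long long)cfs_rq->runnable_load_avg, w,
		     (unsigned long long)ratio);
}

With the patch applied, account_entity_enqueue() has already added
se->load.weight by the time such a log runs, so the ratio stays at or
below 100%.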


>
>> Adjust the sequence, so that all ratio is kept below 100%.
>>
>> Signed-off-by: Lei Wen <leiwen@marvell.com>
>> ---
>> kernel/sched/fair.c | 2 +-
>> 1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 07bd74c..d1eee84 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -1788,8 +1788,8 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>> * Update run-time statistics of the 'current'.
>> */
>> update_curr(cfs_rq);
>> - enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
>> account_entity_enqueue(cfs_rq, se);
>> + enqueue_entity_load_avg(cfs_rq, se, flags & ENQUEUE_WAKEUP);
>
> account_entity_enqueue is independent of enqueue_entity_load_avg;
> their order should not matter.

Yes, agreed, the order should not matter for correctness, but to keep the
trace info consistent, we may need a specific order here.

>
> Further, should we restore the reverted amortization commit (improves
> context switch times)


I don't understand this part...
What does "should we restore the reverted amortization commit (improves
context switch times)" mean here?


> enqueue_entity_load_avg needs to precede
> account_entity_enqueue as it may update se->load.weight.

account_entity_enqueue needs to precede enqueue_entity_load_avg?

Thanks,
Lei

>
>> update_cfs_shares(cfs_rq);
>>
>> if (flags & ENQUEUE_WAKEUP) {
>> --
>> 1.7.10.4
>>
> --
> To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at http://vger.kernel.org/majordomo-info.html
> Please read the FAQ at http://www.tux.org/lkml/

