Subject: Re: [Patch v8 4/7] sched/fair: Enable periodic update of average thermal pressure
On 01/16/2020 10:15 AM, Peter Zijlstra wrote:
> On Tue, Jan 14, 2020 at 02:57:36PM -0500, Thara Gopinath wrote:
>> Introduce support in the CFS periodic tick and other bookkeeping APIs
>> to trigger computation of the average thermal pressure for a CPU.
>> Also consider avg_thermal.load_avg in others_have_blocked(), which
>> allows for decay of PELT signals.
>>
>> Signed-off-by: Thara Gopinath <thara.gopinath@linaro.org>
>> ---
>> kernel/sched/fair.c | 8 ++++++++
>> 1 file changed, 8 insertions(+)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 8da0222..311bb0b 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -7470,6 +7470,9 @@ static inline bool others_have_blocked(struct rq *rq)
>> if (READ_ONCE(rq->avg_dl.util_avg))
>> return true;
>>
>> + if (READ_ONCE(rq->avg_thermal.load_avg))
>> + return true;
>> +
>
> Given that struct sched_avg is 1 cacheline, the above is a pointless
> guaranteed cacheline miss if the arch doesn't have
> CONFIG_HAVE_SCHED_THERMAL_PRESSURE.
Thanks for the review, Peter. I see your suggestion in patch 1 to fix
this issue; I will send out the next version implementing it.
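
For reference, a minimal sketch of that approach (the helper name is my
reading of the patch 1 suggestion, so the final code may differ):

	#ifdef CONFIG_HAVE_SCHED_THERMAL_PRESSURE
	static inline u64 thermal_load_avg(struct rq *rq)
	{
		/* Only archs with thermal pressure support pay for
		 * the extra cacheline access. */
		return READ_ONCE(rq->avg_thermal.load_avg);
	}
	#else
	static inline u64 thermal_load_avg(struct rq *rq)
	{
		/* Constant 0, so the caller's check compiles away. */
		return 0;
	}
	#endif

The test in others_have_blocked() then becomes:

	if (thermal_load_avg(rq))
		return true;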

>
>> #ifdef CONFIG_HAVE_SCHED_AVG_IRQ
>> if (READ_ONCE(rq->avg_irq.util_avg))
>> return true;
>> @@ -7495,6 +7498,7 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>> {
>> const struct sched_class *curr_class;
>> u64 now = rq_clock_pelt(rq);
>> + unsigned long thermal_pressure = arch_cpu_thermal_pressure(cpu_of(rq));
>> bool decayed;
>>
>> /*
>> @@ -7505,6 +7509,8 @@ static bool __update_blocked_others(struct rq *rq, bool *done)
>>
>> decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
>> update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
>> + update_thermal_load_avg(rq_clock_task(rq), rq,
>> + thermal_pressure) |
>> update_irq_load_avg(rq, 0);
>>
>> if (others_have_blocked(rq))
>
> That there indentation trainwreck is a reason to rename the function.
>
> decayed = update_rt_rq_load_avg(now, rq, curr_class == &rt_sched_class) |
> update_dl_rq_load_avg(now, rq, curr_class == &dl_sched_class) |
> update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure) |
> update_irq_load_avg(rq, 0);
>
> Is much better.

Did you intend to suggest renaming update_thermal_load_avg() to
something else here?
>
> But now that you made me look at that, I noticed it's using a different
> clock -- it is _NOT_ using now/rq_clock_pelt(), which means it'll not be
> in sync with the other averages.
>
> Is there a good reason for that?

As Vincent replied in his email, rq_clock_pelt() adjusts the clock for
frequency and CPU capacity invariance. The thermal pressure signal is
already adjusted for this when it is updated from the thermal
framework, which is why rq_clock_task() is used here.
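
To make the clock distinction concrete (my own summary of Vincent's
point, with the reasoning as comments):

	/*
	 * rq_clock_pelt() scales elapsed time by the current frequency
	 * and CPU capacity so that the rt/dl/cfs averages are invariant.
	 * The thermal pressure value passed in here is already
	 * capacity-scaled by the thermal framework, so decaying it with
	 * the pelt clock would apply that scaling twice. Hence the plain
	 * task clock:
	 */
	update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);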
>
>> @@ -10275,6 +10281,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>> {
>> struct cfs_rq *cfs_rq;
>> struct sched_entity *se = &curr->se;
>> + unsigned long thermal_pressure = arch_cpu_thermal_pressure(cpu_of(rq));
>>
>> for_each_sched_entity(se) {
>> cfs_rq = cfs_rq_of(se);
>> @@ -10286,6 +10293,7 @@ static void task_tick_fair(struct rq *rq, struct task_struct *curr, int queued)
>>
>> update_misfit_status(curr, rq);
>> update_overutilized_status(task_rq(curr));
>> + update_thermal_load_avg(rq_clock_task(rq), rq, thermal_pressure);
>> }
>
> I'm thinking this is the wrong place; should this not be in
> scheduler_tick(), right before calling sched_class::task_tick() ? Surely
> any execution will affect thermals, not only fair class execution.

I have read the other comments on this as well, and I agree. I will
move this to scheduler_tick().
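
Roughly, the next version would do something like this (a sketch only;
the exact position relative to the rq lock and clock update may
change):

	void scheduler_tick(void)
	{
		int cpu = smp_processor_id();
		struct rq *rq = cpu_rq(cpu);
		struct task_struct *curr = rq->curr;
		unsigned long thermal_pressure;
		struct rq_flags rf;

		sched_clock_tick();

		rq_lock(rq, &rf);
		update_rq_clock(rq);

		/* Account thermal pressure on every tick, regardless
		 * of which scheduling class is running. */
		thermal_pressure = arch_cpu_thermal_pressure(cpu_of(rq));
		update_thermal_load_avg(rq_clock_task(rq), rq,
					thermal_pressure);

		curr->sched_class->task_tick(rq, curr, 0);
		/* ... remainder of scheduler_tick() unchanged ... */
	}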

>


--
Warm Regards
Thara
