Subject: Re: [PATCH] sched/fair: Rate limit calls to update_blocked_averages() for NOHZ
From: Tim Chen <tim.c.chen@linux.intel.com>
Date: Wed, 7 Apr 2021


On 4/7/21 7:02 AM, Vincent Guittot wrote:
> Hi Tim,
>
> On Wed, 24 Mar 2021 at 17:05, Tim Chen <tim.c.chen@linux.intel.com> wrote:
>>
>>
>>
>> On 3/24/21 6:44 AM, Vincent Guittot wrote:
>>> Hi Tim,
>>
>>>
>>> IIUC your problem, we call update_blocked_averages() but because of:
>>>
>>> if (this_rq->avg_idle < curr_cost + sd->max_newidle_lb_cost) {
>>> update_next_balance(sd, &next_balance);
>>> break;
>>> }
>>>
>>> the for_each_domain() loop stops even before running load_balance() on the 1st
>>> sched_domain level, which means that update_blocked_averages() was called
>>> unnecessarily.
>>>
>>>
>>
>> That's right
>>
>>> And this is even more true with a small sysctl_sched_migration_cost, which allows newly
>>> idle LB for a very small this_rq->avg_idle. We could wonder why you set such a low value
>>> for sysctl_sched_migration_cost, lower than the max_newidle_lb_cost of the
>>> smallest domain, but that's probably because of task_hot().
>>>
>>> If avg_idle is lower than the sd->max_newidle_lb_cost of the 1st sched_domain, we should
>>> skip the spin_unlock/lock and the for_each_domain() loop entirely.
>>>
>>> Maybe something like below:
>>>
>>
>> The patch makes sense. I'll ask our benchmark team to queue this patch for testing.
>
> Do you have feedback from your benchmark team?
>

Vincent,

Thanks for following up. I just got some data back from the benchmark team.
The performance didn't change with your patch, and the overall cpu% of
update_blocked_averages also remained at about the same level. My first
thought was that this update still didn't catch all the calls to
update_blocked_averages:

if (this_rq->avg_idle < sysctl_sched_migration_cost ||
- !READ_ONCE(this_rq->rd->overload)) {
+ !READ_ONCE(this_rq->rd->overload) ||
+ (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
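(For context, this check sits near the top of newidle_balance(), before the
rq lock is dropped for the update_blocked_averages() and for_each_domain()
work. Roughly like this; a simplified sketch of kernel/sched/fair.c with
Vincent's extra condition folded in and most of the function elided, not the
literal code:

static int newidle_balance(struct rq *this_rq, struct rq_flags *rf)
{
	unsigned long next_balance = jiffies + HZ;
	struct sched_domain *sd;
	...
	rcu_read_lock();
	sd = rcu_dereference_check_sched_domain(this_rq->sd);

	/*
	 * Bail out before update_blocked_averages() and the
	 * for_each_domain() loop when avg_idle cannot cover even the
	 * cheapest newidle balance.
	 */
	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
	    !READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
		if (sd)
			update_next_balance(sd, &next_balance);
		rcu_read_unlock();
		goto out;
	}
	rcu_read_unlock();
	...
}
)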

To experiment, I added one more check, on this_rq->next_balance, so that the
path that actually does an idle load balance is further gated on the
next_balance time:

if (this_rq->avg_idle < sysctl_sched_migration_cost ||
- !READ_ONCE(this_rq->rd->overload)) {
+ time_before(jiffies, this_rq->next_balance) ||
+ !READ_ONCE(this_rq->rd->overload) ||
+ (sd && this_rq->avg_idle < sd->max_newidle_lb_cost)) {
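(time_before() is the standard jiffies comparison helper from
include/linux/jiffies.h. For anyone reading along outside the kernel tree,
here is a toy userspace rendition of the check added above; an illustration
with made-up values only, not the kernel macro, which also type-checks its
arguments:

#include <stdio.h>

/* Signed subtraction keeps the comparison correct even when the
 * jiffies counter wraps around. */
#define time_before(a, b)  ((long)((a) - (b)) < 0)

int main(void)
{
	unsigned long jiffies = 1000;		/* made-up values */
	unsigned long next_balance = 1015;	/* balance due 15 jiffies out */

	if (time_before(jiffies, next_balance))
		printf("too early: skip the idle balance path\n");

	jiffies = (unsigned long)-2;		/* counter about to wrap */
	next_balance = 5;			/* due just after the wrap */
	if (time_before(jiffies, next_balance))
		printf("still correct across the wraparound\n");

	return 0;
}
)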

I was surprised to find that the overall cpu% consumption of
update_blocked_averages and the throughput of the benchmark still didn't
change much. So I took a peek at the profile and found that the
update_blocked_averages calls had shifted to the idle load balancer. The
calls to update_blocked_averages from newidle_balance were reduced, so the
patch did what we intended, but the overall rate of calls to
update_blocked_averages remained roughly the same, shifting from
newidle_balance to run_rebalance_domains.

100.00% (ffffffff810cf070)
|
---update_blocked_averages
|
|--95.47%--run_rebalance_domains
| __do_softirq
| |
| |--94.27%--asm_call_irq_on_stack
| | do_softirq_own_stack
| | |
| | |--93.74%--irq_exit_rcu
| | | |
| | | |--88.20%--sysvec_apic_timer_interrupt
| | | | asm_sysvec_apic_timer_interrupt
| | | | |
...
|
|
--4.53%--newidle_balance
pick_next_task_fair
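
(For reference, the softirq handler the calls moved to looks roughly like
this; simplified from kernel/sched/fair.c:

static void run_rebalance_domains(struct softirq_action *h)
{
	struct rq *this_rq = this_rq();
	enum cpu_idle_type idle = this_rq->idle_balance ?
				  CPU_IDLE : CPU_NOT_IDLE;

	/*
	 * nohz_idle_balance() updates the blocked load of the nohz
	 * idle CPUs on their behalf; this is where the
	 * update_blocked_averages() calls show up now.
	 */
	if (nohz_idle_balance(this_rq, idle))
		return;

	/* Otherwise, normal load balance for this CPU. */
	update_blocked_averages(this_rq->cpu);
	rebalance_domains(this_rq, idle);
}
)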

I was expecting the idle load balancer to be rate limited to one update of
the blocked load every 60 ms, which should be 15 jiffies apart on the test
system with CONFIG_HZ_250. When I did a trace on a single CPU, I saw that
update_blocked_averages was often called 1 to 4 jiffies apart, a much higher
rate than I expected. I haven't taken a closer look yet, but you may have a
better idea; I won't have access to the test system and workload till
probably next week.
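
(For concreteness, the kind of throttle I was expecting to see would look
roughly like this; a sketch in the style of the checks in
kernel/sched/fair.c, where next_blocked and blocked_load_update_due() are
hypothetical stand-ins for the real bookkeeping, not the actual kernel code:

static unsigned long next_blocked;	/* next update due, in jiffies */

static bool blocked_load_update_due(void)
{
	/* msecs_to_jiffies(60) == 15 jiffies with CONFIG_HZ_250 */
	if (time_before(jiffies, READ_ONCE(next_blocked)))
		return false;

	WRITE_ONCE(next_blocked, jiffies + msecs_to_jiffies(60));
	return true;
}
)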

Thanks.

Tim
