Subject: Re: [RESEND PATCH] sched/fair: consider RT/IRQ pressure in select_idle_sibling

Hi Peter,

On 02/09/2018 04:53 AM, Peter Zijlstra wrote:

<snip>

>> this_sd = rcu_dereference(*this_cpu_ptr(&sd_llc));
>> if (!this_sd)
>> @@ -6173,8 +6183,15 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
>> return -1;
>> if (!cpumask_test_cpu(cpu, &p->cpus_allowed))
>> continue;
>> + if (idle_cpu(cpu)) {
>> + if (full_capacity(cpu)) {
>> + best_cpu = cpu;
>> + break;
>> + } else if (capacity_of(cpu) > best_cap) {
>> + best_cap = capacity_of(cpu);
>> + best_cpu = cpu;
>> + }
>> + }
> No need for the else. And you'll note you're once again inconsistent
> with your previous self.
>
> But here I worry about big.little a wee bit. I think we're allowed big
> and little cores on the same L3 these days, and you can't directly
> compare capacity between them.
>
>
<snip>

After pulling in the latest code I see that the changes by Mel Gorman
(commit 32e839dda3ba576943365f0f5817ce5c843137dc) have created a short
path for returning an idle CPU.
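
For reference, the shortcut in question sits in select_idle_sibling()
and looks like this (quoting the hunk from that commit as I read it;
the comments are from the commit itself):

	/* Check a recently used CPU as a potential idle candidate: */
	recent_used_cpu = p->recent_used_cpu;
	if (recent_used_cpu != prev &&
	    recent_used_cpu != target &&
	    cpus_share_cache(recent_used_cpu, target) &&
	    idle_cpu(recent_used_cpu) &&
	    cpumask_test_cpu(p->recent_used_cpu, &p->cpus_allowed)) {
		/*
		 * Replace recent_used_cpu with prev as it is a potential
		 * candidate for the next wake:
		 */
		p->recent_used_cpu = prev;
		return recent_used_cpu;
	}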

The fact that this short path now exists to bypass the rest of
select_idle_sibling (SIS) causes a regression in the
"hackbench + ping" test case *when* I add capacity awareness to the
baseline code, as was discussed here.

In detail: today's baseline has a shortcut through recent_used_cpu that
bypasses SIS. When I add capacity awareness to the SIS code path, the
extra search to find a better CPU itself takes more time than the
benefit it provides.
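
To make the ordering concrete, the flow in select_idle_sibling() is now
roughly as follows (a condensed sketch from my reading of the code, with
the recent_used_cpu condition abbreviated to a placeholder; not the
literal source):

	if (idle_cpu(target))
		return target;			/* fast path: wake target */

	if (prev != target && cpus_share_cache(prev, target) && idle_cpu(prev))
		return prev;			/* fast path: previous CPU */

	if (<recent_used_cpu is a cache-sharing, allowed, idle CPU>)
		return recent_used_cpu;		/* new shortcut, bypasses SIS */

	...
	i = select_idle_cpu(p, sd, target);	/* full LLC scan: this is where
						 * the capacity-aware search
						 * adds its cost */

So on wakeups that hit the shortcut, the capacity-aware logic in
select_idle_cpu() never runs; when the scan does run, its extra cost
outweighs the benefit on this workload.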

However, there are certain patches that reduce the SIS cost while
maintaining a similar spread of threads across CPUs. When I use those
patches, I see that the benefit of adding capacity awareness is
restored. Please suggest how to proceed on this.

Thanks,
Rohit
