Subject: Re: [PATCH 1/4] sched/topology: Store root domain CPU capacity sum
From: Dietmar Eggemann <dietmar.eggemann@arm.com>
Date: Wed, 8 Apr 2020
On 08.04.20 14:29, Vincent Guittot wrote:
> On Wed, 8 Apr 2020 at 11:50, Dietmar Eggemann <dietmar.eggemann@arm.com> wrote:

[...]

>> /**
>> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
>> index 8344757bba6e..74b0c0fa4b1b 100644
>> --- a/kernel/sched/topology.c
>> +++ b/kernel/sched/topology.c
>> @@ -2052,12 +2052,17 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>> /* Attach the domains */
>> rcu_read_lock();
>> for_each_cpu(i, cpu_map) {
>> + unsigned long cap = arch_scale_cpu_capacity(i);
>
> Why do you replace the use of rq->cpu_capacity_orig by
> arch_scale_cpu_capacity(i) ?
> There is nothing about this change in the commit message

True. And I can change this back.
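
Something along these lines, i.e. just swapping the accessor back in that
hunk (only a sketch, the rest of the loop body elided):

	for_each_cpu(i, cpu_map) {
		unsigned long cap = capacity_orig_of(i); /* == cpu_rq(i)->cpu_capacity_orig */
		...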

It seems though that the solution is not sufficient because of the
'rd->span ⊄ cpu_active_mask' issue discussed under patch 2/4.

But this reminds me of another question I have.

Currently we use arch_scale_cpu_capacity() more often (16 times) than
capacity_orig_of()/rq->cpu_capacity_orig.

What's stopping us from removing rq->cpu_capacity_orig and the code around
it and relying solely on arch_scale_cpu_capacity()? I mean, the arch
implementation should be fast.
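
IIRC both end up being cheap reads anyway: capacity_orig_of() is just a
wrapper around the cached rq value, and on arm/arm64
arch_scale_cpu_capacity() maps to topology_get_cpu_scale(), i.e. a per-CPU
variable read (quoting from memory, so worth double-checking):

	/* kernel/sched/sched.h */
	static inline unsigned long capacity_orig_of(int cpu)
	{
		return cpu_rq(cpu)->cpu_capacity_orig;
	}

	/* include/linux/arch_topology.h, wired up as arch_scale_cpu_capacity()
	 * on arm/arm64 via the arch topology.h headers
	 */
	static inline unsigned long topology_get_cpu_scale(int cpu)
	{
		return per_cpu(cpu_scale, cpu);
	}

The generic fallback in include/linux/sched/topology.h just returns
SCHED_CAPACITY_SCALE.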
