Date: 5 Apr 2024
Subject: Re: [PATCH v3 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level
From: Dietmar Eggemann
On 05/04/2024 12:25, Vitalii Bursov wrote:
>
>
> On 05.04.24 12:17, Dietmar Eggemann wrote:
>> On 03/04/2024 15:28, Vitalii Bursov wrote:

[...]

>>>> ====== ===========================================================
>>>>   -1   no request. use system default or follow request of others.
>>>>    0   no search.
>>>>    1   search siblings (hyperthreads in a core).
>>>>    2   search cores in a package.
>>>>    3   search cpus in a node [= system wide on non-NUMA system]
>>>>    4   search nodes in a chunk of node [on NUMA system]
>>>>    5   search system wide [on NUMA system]
>>>> ====== ===========================================================
>>
>> IMHO, this list misses:
>>
>> 2 search cores in a cluster.
>>
>> Related to CONFIG_SCHED_CLUSTER.
>> Like you mentioned, if CONFIG_SCHED_CLUSTER is not configured MC becomes
>> level=1.
>
> Previous discussion in v2 on this topic:
> https://lore.kernel.org/linux-kernel/78c60269-5aee-45d7-8014-2c0188f972da@bursov.com/T/#maf4ad0ef3b8c18c8bb3e3524c683b6459c6f7f64

Sorry, I missed this discussion.

I thought that SCHED_CLUSTER is based on shared L3 tags (arm64
Kunpeng 920) or shared L2 cache (x86 Jacobsville), so it's similar to
SCHED_MC, just one level down?
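
To double-check that on a given machine, the domain names can be dumped
from debugfs. This is only an illustrative user-space sketch (not part
of this series), assuming CONFIG_SCHED_DEBUG is set and debugfs is
mounted at /sys/kernel/debug:

#include <stdio.h>

int main(void)
{
	char path[128], name[64];
	int level;

	/* Walk domain0..domainN of CPU0 and print each level's name. */
	for (level = 0; ; level++) {
		FILE *f;

		snprintf(path, sizeof(path),
			 "/sys/kernel/debug/sched/domains/cpu0/domain%d/name",
			 level);
		f = fopen(path, "r");
		if (!f)
			break;		/* no more domain levels */
		if (fgets(name, sizeof(name), f))
			/* Typically SMT, CLUSTER, MC, ... depending on config. */
			printf("level %d: %s", level, name);
		fclose(f);
	}
	return 0;
}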

> The table certainly depends on the kernel configuration, and describing this
> dependency in detail probably isn't worth it, so how the table should look
> in the documentation is debatable...

[...]
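
For what it's worth, here is a minimal user-space sketch of how the knob
is exercised through the cgroup v1 cpuset interface. The mount point and
group name are made up for the example, and 0 simply means "no search"
per the table quoted above:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	/* Hypothetical cpuset group; adjust to the actual cgroup v1 mount. */
	const char *path =
		"/sys/fs/cgroup/cpuset/mygroup/cpuset.sched_relax_domain_level";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror("fopen");
		return EXIT_FAILURE;
	}
	/* -1..5 as in the quoted table; 0 disables the search entirely. */
	fprintf(f, "0\n");
	fclose(f);
	return EXIT_SUCCESS;
}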
