Date: Fri, 5 Apr 2024 11:17:14 +0200
Subject: Re: [PATCH v3 0/3] sched/fair: allow disabling sched_balance_newidle with sched_relax_domain_level
From: Dietmar Eggemann <>
On 03/04/2024 15:28, Vitalii Bursov wrote:
> Changes in v3:
> - Remove levels table change from the documentation patch
> - Link to v2: https://lore.kernel.org/lkml/cover.1711900396.git.vitaly@bursov.com/
> Changes in v2:
> - Split debug.c change in a separate commit and move new "level"
>   after "groups_flags"
> - Added "Fixes" tag and updated commit message
> - Update domain levels cgroup-v1/cpusets.rst documentation
> - Link to v1: https://lore.kernel.org/all/cover.1711584739.git.vitaly@bursov.com/
>
> During the upgrade from Linux 5.4 we found a small (around 3%)
> performance regression which was tracked to commit
> c5b0a7eefc70150caf23e37bc9d639c68c87a097
>
>     sched/fair: Remove sysctl_sched_migration_cost condition
>
>     With a default value of 500us, sysctl_sched_migration_cost is
>     significantly higher than the cost of load_balance. Remove the
>     condition and rely on the sd->max_newidle_lb_cost to abort
>     newidle_balance.
>
> It looks like "newidle" balancing is beneficial for a lot of workloads,
> just not for this specific one. The workload is video encoding; there
> are 100s-1000s of threads, some synchronized with mutexes and
> condition variables. The process aims to keep a portion of the CPU
> idle, so no CPU core is 100% busy. Perhaps the performance impact we
> see comes from additional processing in the scheduler and additional
> costs, like more cache misses, and not from incorrect balancing. See
> perf output below.
>
> My understanding is that the "sched_relax_domain_level" cgroup
> parameter should control whether sched_balance_newidle() is called and
> what the scope of the balancing is, but it doesn't fully work for this
> case.
>
> cpusets.rst documentation:
>> The 'cpuset.sched_relax_domain_level' file allows you to request changing
>> this searching range as you like. This file takes int value which
>> indicates size of searching range in levels ideally as follows,
>> otherwise initial value -1 that indicates the cpuset has no request.
>>
>> ====== ===========================================================
>>   -1   no request. use system default or follow request of others.
>>    0   no search.
>>    1   search siblings (hyperthreads in a core).
>>    2   search cores in a package.
>>    3   search cpus in a node [= system wide on non-NUMA system]
>>    4   search nodes in a chunk of node [on NUMA system]
>>    5   search system wide [on NUMA system]
>> ====== ===========================================================
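(For context, the check that commit removed sat at the top of
newidle_balance() in kernel/sched/fair.c. The following is a from-memory
paraphrase of the before/after, not the verbatim diff:)

	/* Before c5b0a7eefc70: bail out of newidle balancing early
	 * whenever the expected idle time is below a fixed sysctl
	 * threshold (500us by default), regardless of measured cost.
	 */
	if (this_rq->avg_idle < sysctl_sched_migration_cost ||
	    !READ_ONCE(this_rq->rd->overload))
		goto out;

	/* After the commit: the fixed threshold is gone and the abort
	 * relies on the measured per-domain balancing cost instead.
	 */
	if (!READ_ONCE(this_rq->rd->overload) ||
	    (sd && this_rq->avg_idle < sd->max_newidle_lb_cost))
		goto out;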
IMHO, the cpusets.rst list quoted above misses:
2 search cores in a cluster.
This is related to CONFIG_SCHED_CLUSTER. As you mentioned, if CONFIG_SCHED_CLUSTER is not configured, MC becomes level=1.
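The level a domain ends up with matters because set_domain_attribute()
in kernel/sched/topology.c compares it against the requested relax
level. A simplified sketch of that mechanism with this series applied
(paraphrased, not a verbatim copy):

	static void set_domain_attribute(struct sched_domain *sd,
					 struct sched_domain_attr *attr)
	{
		int request;

		if (!attr || attr->relax_domain_level < 0) {
			if (default_relax_domain_level < 0)
				return;
			request = default_relax_domain_level;
		} else {
			request = attr->relax_domain_level;
		}

		/* Every domain at or above the requested level loses
		 * wake/newidle balancing; domains below it keep them.
		 */
		if (sd->level >= request)
			sd->flags &= ~(SD_BALANCE_WAKE | SD_BALANCE_NEWIDLE);
	}

So with CONFIG_SCHED_CLUSTER=y, a request of 2 clears SD_BALANCE_NEWIDLE
on every domain at level 2 and above, leaving it only below that, which
is what the test below shows.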
I ran this on an Arm64 TaiShan 2280 v2, Kunpeng 920 - 4826 server:
$ numactl -H | tail -6
node distances:
node   0   1   2   3
  0:  10  12  20  22
  1:  12  10  22  24
  2:  20  22  10  12
  3:  22  24  12  10
$ head -8 /proc/schedstat | awk '{ print $1 " " $2 }' | tail -5
domain0 00000000,00000000,0000000f
domain1 00000000,00000000,00ffffff
domain2 00000000,0000ffff,ffffffff
domain3 000000ff,ffffffff,ffffffff
domain4 ffffffff,ffffffff,ffffffff
with additional debug:
[ 18.196484] build_sched_domain() cpu=0 name=SMT level=0
[ 18.202308] build_sched_domain() cpu=0 name=CLS level=1
[ 18.208188] build_sched_domain() cpu=0 name=MC level=2
[ 18.222550] build_sched_domain() cpu=0 name=PKG level=3
[ 18.228371] build_sched_domain() cpu=0 name=NODE level=4
[ 18.234515] build_sched_domain() cpu=0 name=NUMA level=5
[ 18.246400] build_sched_domain() cpu=0 name=NUMA level=6
[ 18.258841] build_sched_domain() cpu=0 name=NUMA level=7
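The "additional debug" here is nothing from the series itself, just a
local one-line hack; something like the following (hypothetical, placed
at the end of build_sched_domain() in kernel/sched/topology.c) produces
that output:

	/* Local debug hack, not part of this series: dump the name
	 * and level of each sched domain as it is built.
	 */
	printk(KERN_INFO "build_sched_domain() cpu=%d name=%s level=%d\n",
	       cpu, tl->name, sd->level);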
/* search cores in a cluster */
# echo 2 > /sys/fs/cgroup/cpuset/cpuset.sched_relax_domain_level
# grep . /sys/kernel/debug/sched/domains/cpu0/*/{name,flags,level}
/sys/kernel/debug/sched/domains/cpu0/domain0/name:CLS
/sys/kernel/debug/sched/domains/cpu0/domain1/name:MC
/sys/kernel/debug/sched/domains/cpu0/domain2/name:NUMA
/sys/kernel/debug/sched/domains/cpu0/domain3/name:NUMA
/sys/kernel/debug/sched/domains/cpu0/domain4/name:NUMA
/sys/kernel/debug/sched/domains/cpu0/domain0/flags:SD_BALANCE_NEWIDLE SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_CLUSTER SD_SHARE_LLC SD_PREFER_SIBLING
/sys/kernel/debug/sched/domains/cpu0/domain1/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_SHARE_LLC SD_PREFER_SIBLING
/sys/kernel/debug/sched/domains/cpu0/domain2/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_SERIALIZE SD_OVERLAP SD_NUMA
/sys/kernel/debug/sched/domains/cpu0/domain3/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_SERIALIZE SD_OVERLAP SD_NUMA
/sys/kernel/debug/sched/domains/cpu0/domain4/flags:SD_BALANCE_EXEC SD_BALANCE_FORK SD_WAKE_AFFINE SD_SERIALIZE SD_OVERLAP SD_NUMA
/sys/kernel/debug/sched/domains/cpu0/domain0/level:1
/sys/kernel/debug/sched/domains/cpu0/domain1/level:2
/sys/kernel/debug/sched/domains/cpu0/domain2/level:5
/sys/kernel/debug/sched/domains/cpu0/domain3/level:6
/sys/kernel/debug/sched/domains/cpu0/domain4/level:7
SD_BALANCE_NEWIDLE survives only on domain0 (CLS, level 1); every domain at level >= 2 has it cleared, as expected for a request of 2. LGTM.
Tested-by: Dietmar Eggemann <dietmar.eggemann@arm.com>

> Setting cpuset.sched_relax_domain_level to 0 works as 1.
>
> On a dual-CPU server, domains and levels are as follows:
> domain 0: level 0, SMT
> domain 1: level 2, MC
This is with CONFIG_SCHED_CLUSTER=y?
[...]