Subject: RE: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and add cluster scheduler


> -----Original Message-----
> From: Morten Rasmussen [mailto:morten.rasmussen@arm.com]
> Sent: Saturday, January 9, 2021 4:13 AM
> To: Tim Chen <tim.c.chen@linux.intel.com>
> Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> valentin.schneider@arm.com; catalin.marinas@arm.com; will@kernel.org;
> rjw@rjwysocki.net; vincent.guittot@linaro.org; lenb@kernel.org;
> gregkh@linuxfoundation.org; Jonathan Cameron <jonathan.cameron@huawei.com>;
> mingo@redhat.com; peterz@infradead.org; juri.lelli@redhat.com;
> dietmar.eggemann@arm.com; rostedt@goodmis.org; bsegall@google.com;
> mgorman@suse.de; mark.rutland@arm.com; sudeep.holla@arm.com;
> aubrey.li@linux.intel.com; linux-arm-kernel@lists.infradead.org;
> linux-kernel@vger.kernel.org; linux-acpi@vger.kernel.org;
> linuxarm@openeuler.org; xuwei (O) <xuwei5@huawei.com>; Zengtao (B)
> <prime.zeng@hisilicon.com>; tiantao (H) <tiantao6@hisilicon.com>
> Subject: Re: [RFC PATCH v3 0/2] scheduler: expose the topology of clusters and
> add cluster scheduler
>
> On Thu, Jan 07, 2021 at 03:16:47PM -0800, Tim Chen wrote:
> > On 1/6/21 12:30 AM, Barry Song wrote:
> > > ARM64 server chip Kunpeng 920 has 6 clusters in each NUMA node, and each
> > > cluster has 4 cpus. All clusters share L3 cache data while each cluster
> > > has local L3 tag. On the other hand, each cluster will share some
> > > internal system bus. This means cache is much more affine inside one cluster
> > > than across clusters.
> >
> > There is a similar need for clustering in x86. Some x86 cores could share
> L2 caches that
> > is similar to the cluster in Kupeng 920 (e.g. on Jacobsville there are 6 clusters
> > of 4 Atom cores, each cluster sharing a separate L2, and 24 cores sharing
> L3).
> > Having a sched domain at the L2 cluster helps spread load among
> > L2 domains. This will reduce L2 cache contention and help with
> > performance for low to moderate load scenarios.
>
> IIUC, you are arguing for the exact opposite behaviour, i.e. balancing
> between L2 caches while Barry is after consolidating tasks within the
> boundaries of a L3 tag cache. One helps cache utilization, the other
> communication latency between tasks. Am I missing something?

Morten, this is not true.

We are actually both looking for the same behavior. My patch does
the exact same spreading that Tim's patch does.

Consider the two cases below:
Case 1. We have two tasks with no relationship to each other running in a system with 2 clusters and 8 cpus.

Without the sched_domain of cluster, these two tasks might be put as below:
+-------------------+  +-----------------+
| +----+  +----+    |  |                 |
| |task|  |task|    |  |                 |
| |1   |  |2   |    |  |                 |
| +----+  +----+    |  |                 |
|                   |  |                 |
|     cluster1      |  |     cluster2    |
+-------------------+  +-----------------+
With the sched_domain of cluster, load balance will spread them as below:
+-------------------+  +-----------------+
| +----+            |  | +----+          |
| |task|            |  | |task|          |
| |1   |            |  | |2   |          |
| +----+            |  | +----+          |
|                   |  |                 |
|     cluster1      |  |     cluster2    |
+-------------------+  +-----------------+

Then task1 and task2 each get more cache and cache contention decreases,
so they get better performance.

That is what my original patch achieves as well, and it is what Tim's
patch is also doing: once we add a sched_domain, load balance gets
involved.
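Just for clarity, the topology side of this boils down to inserting one more
level into the default sched_domain topology table, so that the generic load
balancer also runs at the cluster boundary. The snippet below is only a sketch
to illustrate the idea, not the exact diff; CONFIG_SCHED_CLUSTER,
cpu_clustergroup_mask and cpu_cluster_flags are the names used here for
illustration:

/*
 * Sketch: add a CLS (cluster) level between SMT and MC so that load
 * balancing also happens across cluster boundaries inside one LLC.
 */
static struct sched_domain_topology_level default_topology[] = {
#ifdef CONFIG_SCHED_SMT
        { cpu_smt_mask, cpu_smt_flags, SD_INIT_NAME(SMT) },
#endif
#ifdef CONFIG_SCHED_CLUSTER
        /* CPUs sharing one cluster's L3 tag and internal bus */
        { cpu_clustergroup_mask, cpu_cluster_flags, SD_INIT_NAME(CLS) },
#endif
#ifdef CONFIG_SCHED_MC
        { cpu_coregroup_mask, cpu_core_flags, SD_INIT_NAME(MC) },
#endif
        { cpu_cpu_mask, SD_INIT_NAME(DIE) },
        { NULL, },
};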


Case 2. We have 8 tasks running in a system with 2 clusters and 8 cpus,
but they are working in 4 groups:
Task1 wakes up task4
Task2 wakes up task5
Task3 wakes up task6
Task7 wakes up task8

With my change in select_idle_sibling(), the WAKE_AFFINE mechanism will
try to put task1 and 4, task2 and 5, task3 and 6, and task7 and 8 in the same
clusters rather than placing them on random ones of the 8 cpus. However, the
8 tasks still spread among the 8 cpus with my change in select_idle_sibling(),
as load balance is still working.

+---------------------------+  +----------------------+
| +-----+  +-----+          |  | +-----+  +-----+     |
| |task |  |task |          |  | |task |  |task |     |
| |1    |  |4    |          |  | |2    |  |5    |     |
| +-----+  +-----+          |  | +-----+  +-----+     |
|                           |  |                      |
|         cluster1          |  |       cluster2       |
|                           |  |                      |
| +-----+  +-----+          |  | +-----+  +-----+     |
| |task |  |task |          |  | |task |  |task |     |
| |3    |  |6    |          |  | |7    |  |8    |     |
| +-----+  +-----+          |  | +-----+  +-----+     |
+---------------------------+  +----------------------+
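To make the select_idle_sibling() side of the picture above concrete: the idea
is simply to scan the waker's cluster for an idle CPU before falling back to
the whole LLC. The snippet below is only a sketch of that idea, not the exact
diff; sd_cluster is assumed here as a per-CPU pointer analogous to sd_llc:

/*
 * Sketch: prefer an idle CPU inside the target's cluster; return -1 to
 * fall back to the normal LLC-wide scan if the cluster is busy.
 */
static int scan_cluster_for_idle(struct task_struct *p, int target)
{
        struct sched_domain *sd = rcu_dereference(per_cpu(sd_cluster, target));
        int cpu;

        if (!sd)
                return -1;

        for_each_cpu_wrap(cpu, sched_domain_span(sd), target) {
                if (!cpumask_test_cpu(cpu, p->cpus_ptr))
                        continue;
                if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
                        return cpu;
        }

        return -1;
}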

Let's consider a 3rd case, which is more tricky:

task1 and task2 have a close relationship: they are a waker-wakee pair.
With my current patch, select_idle_sibling() wants to put them in one
cluster while load balance wants to put them in two clusters, and load
balance will win. Then maybe we need a mechanism similar to the NUMA
imbalance adjustment:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/kernel/sched/fair.c?id=b396f52326de20
If we permit a light imbalance between clusters, select_idle_sibling()
will win, and task1 and task2 get better cache affinity.

The 3rd case could be our goal for the next step.
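If we go that way, I would expect the change to look much like the SD_NUMA
check in calculate_imbalance(); the snippet below is only a sketch, with
SD_CLUSTER as an assumed flag for the new cluster domain and the threshold of
2 chosen purely for illustration:

/*
 * Sketch: tolerate a tiny imbalance at the cluster level so that a
 * waker-wakee pair placed by select_idle_sibling() is not pulled apart
 * by load balance when the system is lightly loaded.
 */
static void adjust_cluster_imbalance(struct lb_env *env,
                                     struct sg_lb_stats *busiest)
{
        if (!(env->sd->flags & SD_CLUSTER))
                return;

        /* Leave a pair of closely related tasks in one cluster */
        if (busiest->sum_nr_running <= 2)
                env->imbalance = 0;
}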


Thanks
Barry
