Messages in this thread | | | Date | Thu, 13 Apr 2017 17:48:12 +0200 | From | Peter Zijlstra <> | Subject | Re: [RFC 2/3] sched/topology: fix sched groups on NUMA machines with mesh topology |
| |
On Thu, Apr 13, 2017 at 10:56:08AM -0300, Lauro Ramos Venancio wrote: > Currently, on a 4 nodes NUMA machine with ring topology, two sched > groups are generated for the last NUMA sched domain. One group has the > CPUs from NUMA nodes 3, 0 and 1; the other group has the CPUs from nodes > 1, 2 and 3. As CPUs from nodes 1 and 3 belongs to both groups, the > scheduler is unable to directly move tasks between these nodes. In the > worst scenario, when a set of tasks are bound to nodes 1 and 3, the > performance is severely impacted because just one node is used while the > other node remains idle.
I feel a picture would be ever so much clearer.
> This patch constructs the sched groups from each CPU perspective. So, on > a 4 nodes machine with ring topology, while nodes 0 and 2 keep the same > groups as before [(3, 0, 1)(1, 2, 3)], nodes 1 and 3 have new groups > [(0, 1, 2)(2, 3, 0)]. This allows moving tasks between any node 2-hops > apart.
So I still have no idea what specifically goes wrong and how this fixes it. Changelog is impenetrable.
"From each CPU's persepective" doesn't really help, there already is a for_each_cpu() in.
Also, since I'm not sure what happend to the 4 node system, I cannot begin to imagine what would happen on the 8 node one.
| |