Date:	Tue, 16 Jun 2015 13:18:51 -0400
From:	Rik van Riel <>
Subject:	Re: [PATCH v2 3/4] sched:Fix task_numa_migrate to always update preferred node
On 06/16/2015 07:56 AM, Srikar Dronamraju wrote:
> @@ -1519,16 +1519,9 @@ static int task_numa_migrate(struct task_struct *p)
>  	 * and is migrating into one of the workload's active nodes, remember
>  	 * this node as the task's preferred numa node, so the workload can
>  	 * settle down.
> -	 * A task that migrated to a second choice node will be better off
> -	 * trying for a better one later. Do not set the preferred node here.
>  	 */
>  	if (p->numa_group) {
> -		if (env.best_cpu == -1)
> -			nid = env.src_nid;
> -		else
> -			nid = env.dst_nid;
> -
> -		if (node_isset(nid, p->numa_group->active_nodes))
> +		if (env.dst_nid != p->numa_preferred_nid)
>  			sched_setnuma(p, env.dst_nid);
>  	}
Looking at the original code again, it looks like my code has a potential bug (or at least a downside), too.
We set p->numa_group->active_nodes based on which nodes the group triggers many NUMA faults from (that is, the CPUs the tasks in the group were running on when they took the NUMA faults).
This means that if a workload has not yet converged, the active_nodes mask may be much larger than desired, and we can end up setting p->numa_preferred_nid to a node that is currently in the active_nodes mask but really should not be...
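
For reference, the mask maintenance looks roughly like the
update_numa_active_node_mask() helper in kernel/sched/fair.c of that
era. The sketch below is a from-memory reconstruction, not a quote
from this thread; in particular the 6/16 join and 3/16 leave ratios
are approximate:

	static void update_numa_active_node_mask(struct numa_group *numa_group)
	{
		unsigned long faults, max_faults = 0;
		int nid;

		/* Find the node where the group took the most NUMA faults. */
		for_each_online_node(nid) {
			faults = group_faults_cpu(numa_group, nid);
			if (faults > max_faults)
				max_faults = faults;
		}

		/*
		 * A node joins the active set once it sees a large enough
		 * share of the group's faults, and only drops out at a
		 * lower threshold, so the mask does not flap. Ratios are
		 * approximate, from memory.
		 */
		for_each_online_node(nid) {
			faults = group_faults_cpu(numa_group, nid);
			if (!node_isset(nid, numa_group->active_nodes)) {
				if (faults > max_faults * 6 / 16)
					node_set(nid, numa_group->active_nodes);
			} else if (faults < max_faults * 3 / 16)
				node_clear(nid, numa_group->active_nodes);
		}
	}

The sketch shows where the inflation comes from: while an unconverged
workload's faults are still spread out, several nodes can clear the join
threshold at once, and the hysteresis between the two thresholds then
lets them linger in the mask after the workload has moved on.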
I have no ideas on how to improve that situation, though :)