Date: Sat, 16 May 2015 11:31:48 +0200
From: Peter Zijlstra <>
Subject: Re: [RFC][PATCH 4/4] sched, numa: Ignore pinned tasks
On Fri, May 15, 2015 at 05:43:37PM +0200, Peter Zijlstra wrote:
>  static void account_numa_enqueue(struct rq *rq, struct task_struct *p)
>  {
> +	if (p->nr_cpus_allowed == 1) {
> +		p->numa_preferred_nid = -1;
> +		rq->nr_pinned_running++;
> +	}
>  	rq->nr_numa_running += (p->numa_preferred_nid != -1);
>  	rq->nr_preferred_running += (p->numa_preferred_nid == task_node(p));
>  }
>
>  static inline enum fbq_type fbq_classify_rq(struct rq *rq)
>  {
> +	unsigned int nr_migratable = rq->cfs.h_nr_running - rq->nr_pinned_running;
> +
FWIW, there's a problem there with CFS bandwidth muck. When we throttle groups we update cfs.h_nr_running properly, but we do not hierarchically account the pinned, preferred and numa counts.
So the subtraction above can end up negative; and since nr_migratable is unsigned, it actually wraps to a huge value.
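For illustration only (not part of the patch): a defensive variant of that computation which clamps at zero, so an unaccounted pinned task on a throttled group can't make the unsigned value wrap. The helper name rq_nr_migratable() is made up here.

static inline unsigned int rq_nr_migratable(struct rq *rq)
{
	unsigned int h_nr_running = rq->cfs.h_nr_running;
	unsigned int nr_pinned = rq->nr_pinned_running;

	/* h_nr_running may already have been decremented for a throttled
	 * group while nr_pinned_running was not, so clamp at zero. */
	return h_nr_running > nr_pinned ? h_nr_running - nr_pinned : 0;
}

That only papers over the wrap, of course; the counts themselves stay stale while a group is throttled.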
I've not yet decided what to do about this; ideally we'd do the hierarchical accounting of the numa stats -- but that's a little bit more expensive than I'd like.
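A rough sketch of what that hierarchical accounting could look like, assuming a hypothetical per-cfs_rq h_nr_pinned_running field maintained the same way h_nr_running is, so throttle/unthrottle can subtract a group's contribution:

static void account_pinned(struct task_struct *p, int inc)
{
	struct sched_entity *se = &p->se;

	if (p->nr_cpus_allowed != 1)
		return;

	/* Walk the group hierarchy like enqueue_task_fair() does for
	 * h_nr_running, so each cfs_rq knows how many of its runnable
	 * tasks are pinned. */
	for_each_sched_entity(se)
		cfs_rq_of(se)->h_nr_pinned_running += inc;
}

throttle_cfs_rq() could then drop cfs_rq->h_nr_pinned_running from the rq counter alongside its h_nr_running adjustment; the cost is the extra per-level update on every enqueue/dequeue, which is the expense referred to above.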
Ah well, that's for Monday.