Subject: Re: [PATCH v2] sched/numa: advanced per-cgroup numa statistic
Hi, Michal

On 2019/11/2 1:39 AM, Michal Koutný wrote:
> Hello Yun.
>
> On Tue, Oct 29, 2019 at 03:57:20PM +0800, 王贇 <yun.wang@linux.alibaba.com> wrote:
>> +static void update_numa_statistics(struct cfs_rq *cfs_rq)
>> +{
>> + int idx;
>> + unsigned long remote = current->numa_faults_locality[3];
>> + unsigned long local = current->numa_faults_locality[4];
>> +
>> + cfs_rq->nstat.jiffies++;
> This statistics effectively doubles what
> kernel/sched/cpuacct.c:cpuacct_charge() does (measuring per-cpu time).
> Hence it seems redundant.

Yes, but since there is no guarantee that the cpu cgroup is always
bound together with cpuacct in v1, we can't rely on that...

>
>> +
>> + if (!remote && !local)
>> + return;
>> +
>> + idx = (NR_NL_INTERVAL - 1) * local / (remote + local);
>> + cfs_rq->nstat.locality[idx]++;
> IIUC, the mechanism numa_faults_locality values, this statistics only
> estimates the access locality based on NUMA balancing samples, i.e.
> there exists more precise source of that information.
>
> All in all, I'd concur to Mel's suggestion of external measurement.

Currently NUMA balancing is the only source I can find that tells the
real story; at least we know that after the page fault, the task did
access the page from that CPU. Although it can't cover all the cases,
it still gives good hints :-)
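
To make that concrete, below is a rough user-space sketch of the
bucketing update_numa_statistics() does above; NR_NL_INTERVAL is set
to 8 here purely for illustration, not necessarily the value the
patch uses:

#include <stdio.h>

/* assumed bucket count, for illustration only */
#define NR_NL_INTERVAL 8

/*
 * Map the NUMA balancing locality ratio local / (local + remote)
 * into one of NR_NL_INTERVAL buckets, the same way the hunk above
 * indexes cfs_rq->nstat.locality[].
 */
static int locality_bucket(unsigned long local, unsigned long remote)
{
	if (!local && !remote)
		return -1;	/* no NUMA balancing samples yet */

	return (NR_NL_INTERVAL - 1) * local / (remote + local);
}

int main(void)
{
	printf("%d\n", locality_bucket(90, 10));	/* mostly local  -> bucket 6 */
	printf("%d\n", locality_bucket(10, 90));	/* mostly remote -> bucket 0 */
	return 0;
}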

It would be great if we could find more similar indicators, like the
migration failure counter Mel mentioned, which gives good hints about
memory policy problems and could serve as an external measurement.

Regards,
Michael Wang

>
> Michal
>
