Subject: Re: Externalize SLIT table
On Thursday 04 November 2004 15:13, Jack Steiner wrote:
> I think it would also be useful to have a similar cpu-to-cpu distance
> metric:
>         % cat /sys/devices/system/cpu/cpu0/distance
>         10 20 40 60
>
> This gives the same information but is cpu-centric rather than
> node centric.

I don't see the use of that once you have a way to map logical CPU
numbers to node numbers. The "node distances" are meant to be
proportional to memory access latency ratios: a value of 20 means
twice the latency of a local (intra-node) access, which is by
definition 10. If a cpu-to-cpu distance is needed because there is a
hierarchy among the memory blocks inside one node, then maybe the
definition of a node should be changed...

We currently have (at least in -mm kernels):
% ls /sys/devices/system/node/node0/cpu*
for finding out which CPUs belong to which nodes. Together with
/sys/devices/system/node/node0/distances
this should be enough for user-space NUMA tools.
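
Purely as an illustration (untested, and assuming the -mm file names
above; other trees may spell the per-node SLIT file "distance" rather
than "distances"), a user-space tool could reconstruct the cpu-centric
row Jack asked for roughly like this:

#include <dirent.h>
#include <stdio.h>
#include <stdlib.h>

#define MAX_CPUS  1024
#define SYS_NODES "/sys/devices/system/node"

static int cpu_to_node[MAX_CPUS];

/* Map each CPU to its node by scanning the nodeN/cpuM sysfs entries. */
static int build_cpu_map(void)
{
	struct dirent *nd, *cd;
	DIR *nodes, *node;
	char path[256];
	int i, found = 0;

	for (i = 0; i < MAX_CPUS; i++)
		cpu_to_node[i] = -1;

	nodes = opendir(SYS_NODES);
	if (!nodes)
		return -1;
	while ((nd = readdir(nodes)) != NULL) {
		int nid, cpu;

		if (sscanf(nd->d_name, "node%d", &nid) != 1)
			continue;
		snprintf(path, sizeof(path), SYS_NODES "/%s", nd->d_name);
		node = opendir(path);
		if (!node)
			continue;
		while ((cd = readdir(node)) != NULL) {
			if (sscanf(cd->d_name, "cpu%d", &cpu) == 1 &&
			    cpu >= 0 && cpu < MAX_CPUS) {
				cpu_to_node[cpu] = nid;
				found = 1;
			}
		}
		closedir(node);
	}
	closedir(nodes);
	return found ? 0 : -1;
}

int main(int argc, char **argv)
{
	char path[256], line[512];
	int cpu, node;
	FILE *f;

	cpu = (argc > 1) ? atoi(argv[1]) : 0;
	if (build_cpu_map() < 0 || cpu < 0 || cpu >= MAX_CPUS ||
	    (node = cpu_to_node[cpu]) < 0) {
		fprintf(stderr, "cannot resolve cpu%d\n", cpu);
		return 1;
	}

	/* The owning node's SLIT row doubles as the cpu-centric row. */
	snprintf(path, sizeof(path), SYS_NODES "/node%d/distances", node);
	f = fopen(path, "r");
	if (!f || !fgets(line, sizeof(line), f)) {
		perror(path);
		return 1;
	}
	fclose(f);

	printf("cpu%d (node%d): %s", cpu, node, line);
	return 0;
}

On Jack's example machine this should print the same "10 20 40 60" row
for cpu0 that the proposed cpu0/distance file would contain, without
adding a new cpu-centric interface to the kernel.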

Regards,
Erich

