 
    Subject: Re: [PATCH] x86: Consider multiple nodes in a single socket to be "sane"
    On Tue, 16 Sep 2014 08:44:03 +0200
    Ingo Molnar <mingo@kernel.org> wrote:

    >
    > * Chuck Ebbert <cebbert.lkml@gmail.com> wrote:
    >
    > > On Tue, 16 Sep 2014 05:29:20 +0200
    > > Peter Zijlstra <peterz@infradead.org> wrote:
    > >
    > > > On Mon, Sep 15, 2014 at 03:26:41PM -0700, Dave Hansen wrote:
    > > > >
    > > > > I'm getting the spew below when booting with Haswell (Xeon
    > > > > E5-2699) CPUs and the "Cluster-on-Die" (CoD) feature
    > > > > enabled in the BIOS.
    > > >
    > > > What is that cluster-on-die thing? I've heard it before but
    > > > never could find anything on it.
    > >
    > > Each CPU has 2.5MB of L3 connected together in a ring that
    > > makes it all act like a single shared cache. The HW tries to
    > > place the data so it's closest to the CPU that uses it. On the
    > > larger processors there are two rings with an interconnect
    > > between them that adds latency if a cache fetch has to cross
    > > that. CoD breaks that connection and effectively gives you two
    > > nodes on one die.
    >
    > Note that that's not really a 'NUMA node' in the way lots of
    > places in the kernel assume it: permanent placement asymmetry
    > (and access cost asymmetry) of RAM.
    >
    > It's a new topology construct that needs new handling (and
    > probably a new mask): Non Uniform Cache Architecture (NUCA)
    > or so.

    Hmm, looking closer at the diagram, each ring has its own memory controller, so
    it really is NUMA if you break the interconnect between the caches.
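    For context, a minimal user-space sketch (not from the thread, and assuming
    the usual /sys/devices/system/node layout with contiguously numbered node
    directories) that prints each NUMA node's CPU list; with CoD enabled, each
    socket should show up as two nodes rather than one:

    /*
     * Minimal sketch: dump each NUMA node's CPU list via the standard
     * sysfs interface.  Assumes node directories are numbered
     * contiguously starting at node0.
     */
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
            char path[128], cpus[256];
            int node;

            for (node = 0; ; node++) {
                    FILE *f;

                    snprintf(path, sizeof(path),
                             "/sys/devices/system/node/node%d/cpulist", node);
                    f = fopen(path, "r");
                    if (!f)
                            break;  /* no more nodes */

                    if (fgets(cpus, sizeof(cpus), f)) {
                            cpus[strcspn(cpus, "\n")] = '\0';
                            printf("node%d: CPUs %s\n", node, cpus);
                    }
                    fclose(f);
            }
            return 0;
    }

    The same split is also visible in the output of numactl --hardware or lscpu.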

