 
From: Ingo Molnar <mingo@kernel.org>
Date: 2012-12-11
Subject: Re: [PATCH 00/49] Automatic NUMA Balancing v10

* Ingo Molnar <mingo@kernel.org> wrote:

> > This is a prototype only, but it is what I was using as a
> > reference to see whether I could spot a problem in yours. It
> > has not even been boot tested, but it avoids remote->remote
> > copies and avoids contending on the PTL or holding it longer
> > than necessary (it should, anyway).
>
> So ... because time is running out and it would be nice to
> progress with this for v3.8, I'd suggest the following
> approach:
>
> - Please send your current tree to Linus as-is. You already
> have my Acked-by/Reviewed-by for its scheduler bits, and my
> testing found your tree to have no regressions relative to
> mainline, plus it's a nice win in a number of NUMA-intensive
> workloads. So it's a good, monotonic step forward in terms of
> NUMA balancing, very close to what the bits I'm working on
> need as infrastructure.
>
> - I'll rebase all my devel bits on top of it. Instead of
> removing the migration bandwidth limit I'll simply increase it
> for testing - this should trigger similarly aggressive
> behavior. I'll try to touch as little of the mm/ code as
> possible, to keep things debuggable.

One minor last-minute request/nit before you send it to Linus:
would you mind doing a

CONFIG_BALANCE_NUMA => CONFIG_NUMA_BALANCING

rename please? (I can do it for you if you don't have the time.)
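
If it helps, the rename should be entirely mechanical; a minimal
sketch (untested; assumes GNU sed and that no unrelated
identifier contains the BALANCE_NUMA substring):

  git grep -l BALANCE_NUMA | \
    xargs sed -i 's/BALANCE_NUMA/NUMA_BALANCING/g'

That rewrites the Kconfig symbol and every CONFIG_BALANCE_NUMA
user in one pass.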

CONFIG_NUMA_BALANCING is really what fits into our existing NUMA
namespace (CONFIG_NUMA, CONFIG_NUMA_EMU) - and, more importantly,
the word ordering follows the common generic -> less generic
ordering we use in the kernel for config names and methods.

So it would fit nicely into existing Kconfig naming schemes:

CONFIG_TRACING
CONFIG_FILE_LOCKING
CONFIG_EVENT_TRACING

etc.
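
With the rename, the Kconfig entry would then read along these
lines (a sketch only - the exact prompt and dependencies are
whatever your tree already carries for CONFIG_BALANCE_NUMA):

  config NUMA_BALANCING
          bool "Memory placement aware NUMA scheduler"
          depends on SMP && NUMA && MIGRATION
          help
            Enable automatic NUMA-aware placement of tasks and
            their memory.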

Thanks,

Ingo

