Subject: Re: [PATCH 11/39] autonuma: CPU follow memory algorithm
Hi,

On Wed, Mar 28, 2012 at 01:26:08PM +0200, Peter Zijlstra wrote:
> Right, so can we agree that the only case where they diverge is single
> processes that have multiple threads and are bigger than a single node (either
> in memory, cputime or both)?

I think it also diverges vastly for processes that are smaller than
one node: 1) your numa/sched goes blind with an almost arbitrary home
node, and 2) your migrate-on-fault will be unable to provide efficient,
steady asynchronous background migration.

> I've asked you several times why you care about that one case so much, but
> without answer.

If this case weren't important to you, you wouldn't need to introduce
your syscalls.

> I'll grant you that unmodified such processes might do better with your
> stuff, however:
>
> - your stuff assumes there is a fair amount of locality to exploit.
>
> I'm not seeing how this is true in general, since data partitioning is hard
> and for those problems where its possible people tend to already do so,
> yielding natural points to add the syscalls.

Later, I plan to detect this and lay out interleaved pages
automatically, so you won't even need to set MPOL_INTERLEAVE manually.
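For reference, this is roughly what an application has to do by hand
today with mbind(2); the automatic layout would make it unnecessary.
A minimal sketch only: the region size and the nodemask covering nodes
0-1 are made-up values.

#define _GNU_SOURCE
#include <numaif.h>	/* mbind(), MPOL_INTERLEAVE (link with -lnuma) */
#include <sys/mman.h>
#include <stdio.h>

int main(void)
{
	size_t len = 1UL << 30;		/* 1GB shared buffer, arbitrary */
	unsigned long nodemask = 0x3;	/* nodes 0 and 1 */

	void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Spread the not-yet-faulted pages round-robin across both nodes. */
	if (mbind(buf, len, MPOL_INTERLEAVE, &nodemask,
		  8 * sizeof(nodemask), 0)) {
		perror("mbind");
		return 1;
	}

	/* ... threads running on both nodes access buf ... */
	return 0;
}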

> - your stuff doesn't actually nest, since a guest kernel has no clue as to
> what constitutes a node (or if there even is such a thing) it will randomly
> move tasks around on the vcpus, with complete disrespect for whatever host
> vcpu<->page mappings you set up.
>
> guest kernels actively scramble whatever relations you're building by
> scanning, destroying whatever (temporal) locality you think you might
> have found.

This will work fine with AutoNUMA running in both guest and host. qemu
just needs to create a virtual topology for the guest that matches the
hardware topology. Hard binds in the guest will also work well (they
create node locality too).

A paravirt layer could also hint the host on vcpu switches so that it
can shift the host NUMA stats across, but I haven't thought much about
this possible paravirt numa-sched optimization; it's not mandatory,
just an idea.

> Related to this is that all applications that currently use mbind() and
> sched_setaffinity() are trivial to convert.

Too bad firefox isn't using mbind yet. My primary targets are the 99%
of apps out there running on a 24-way, 2-node system or equivalent,
and KVM.

I agree converting qemu to the syscalls would be trivial though.
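To make that concrete, here is a hypothetical sketch using only the
existing sched_setaffinity()/mbind() interfaces mentioned above (not
the new syscalls): pin a vcpu thread to one node's CPUs and bind its
slice of guest RAM to the same node. The CPU range, node number and
slice geometry are invented; real code would take them from the -numa
topology.

#define _GNU_SOURCE
#include <sched.h>	/* sched_setaffinity(), CPU_SET */
#include <numaif.h>	/* mbind(), MPOL_BIND (link with -lnuma) */
#include <stdio.h>

static int bind_worker_to_node(void *ram, size_t ram_len,
			       int node, int first_cpu, int last_cpu)
{
	cpu_set_t cpus;
	unsigned long nodemask = 1UL << node;
	int cpu;

	/* Hard-bind the calling (vcpu) thread to the node's CPUs. */
	CPU_ZERO(&cpus);
	for (cpu = first_cpu; cpu <= last_cpu; cpu++)
		CPU_SET(cpu, &cpus);
	if (sched_setaffinity(0, sizeof(cpus), &cpus)) {
		perror("sched_setaffinity");
		return -1;
	}

	/* Bind this worker's guest-RAM slice to the same node. */
	if (mbind(ram, ram_len, MPOL_BIND, &nodemask,
		  8 * sizeof(nodemask), 0)) {
		perror("mbind");
		return -1;
	}
	return 0;
}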

