Date: 7 Oct 2004
From: Paul Jackson <pj@sgi.com>
Subject: Re: [Lse-tech] [PATCH] cpusets - big numa cpu and memory placement

Andrew wrote:
> I'd be interested to know why this all cannot be done by a
> userspace daemon/server thing.

The biggest stumbling block was the binding of task to cpuset, the
task->cpuset pointer. I doubt you would accept a patch to the kernel
that called out to my daemon on every fork and exit, to update this
binding. We require a robust answer to the question of which tasks are
in a cpuset. And the loop to read this back out, which scans each task
to see whether it points to a particular cpuset, would be significantly
less atomic than it is now if it had to be done one task at a time from
user space.
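
To make that binding concrete, here's a toy stand-alone C sketch of
what I mean; the structs and helpers are simplified stand-ins (plain
ints and malloc'd memory, not task_struct or the actual kernel patch),
just enough to show the pointer plus the fork/exit reference counting:

#include <stdio.h>
#include <stdlib.h>

struct cpuset {
	int refcount;			/* tasks still pointing at this cpuset */
	unsigned long cpus_allowed;	/* toy bitmask standing in for cpumask_t */
};

struct task {
	struct cpuset *cpuset;		/* the task->cpuset binding discussed above */
};

/* on fork, the child inherits the parent's cpuset and pins it */
static void cpuset_fork(struct task *child, const struct task *parent)
{
	child->cpuset = parent->cpuset;
	child->cpuset->refcount++;
}

/* on exit, the task drops its reference; the last user frees the cpuset */
static void cpuset_exit(struct task *tsk)
{
	struct cpuset *cs = tsk->cpuset;

	tsk->cpuset = NULL;
	if (--cs->refcount == 0)
		free(cs);
}

int main(void)
{
	struct cpuset *root = calloc(1, sizeof(*root));
	struct task parent = { root }, child = { NULL };

	if (!root)
		return 1;
	root->refcount = 1;
	root->cpus_allowed = 0xf;	/* cpus 0-3 */

	cpuset_fork(&child, &parent);
	printf("refcount after fork: %d\n", root->refcount);	/* prints 2 */

	cpuset_exit(&child);
	cpuset_exit(&parent);
	return 0;
}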

A second stumbling block, which perhaps you can recommend some way to
deal with, is permissions. What's the recommended way for this daemon
to verify the authority of the requesting process?

Also, the other means of poking the affinity masks (sched_setaffinity,
mbind and set_mempolicy) need to be constrained to respect cpuset
boundaries and honor exclusion. I doubt you want them calling out to a
user daemon either.
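
The kind of check those calls would need is simple enough to sketch;
here is a stand-alone toy version, with a plain unsigned long standing
in for cpumask_t and a made-up helper name, not the real syscall code:

#include <errno.h>
#include <stdio.h>

/* clip the requested mask to the caller's cpuset; refuse a request
 * that leaves no cpus at all */
static int clip_to_cpuset(unsigned long requested, unsigned long cpuset_cpus,
			  unsigned long *effective)
{
	unsigned long allowed = requested & cpuset_cpus;

	if (!allowed)
		return -EINVAL;
	*effective = allowed;
	return 0;
}

int main(void)
{
	unsigned long effective;

	/* cpuset allows cpus 4-7; caller asks for cpus 0-5 */
	if (clip_to_cpuset(0x3f, 0xf0, &effective) == 0)
		printf("effective mask: 0x%lx\n", effective);	/* 0x30 */
	return 0;
}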

And the memory affinity mask, mems_allowed, seems to require updating
within the current task context. Perhaps someone else is smart enough
to see an alternative, but I could not find a safe way to update this
from outside the current context. So it's updated on the path going
into __alloc_pages(). I doubt you want a patch that calls out to my
daemon on each call into __alloc_pages().
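
One way to picture the update happening on the allocator path, in the
task's own context, is a generation counter that the task compares
against its cpuset; take this as an illustration under that assumption
(toy structs, invented field names), not the literal code in the patch:

#include <stdio.h>

struct cpuset {
	unsigned long mems_allowed;	/* toy nodemask */
	int mems_generation;		/* bumped whenever mems_allowed changes */
};

struct task {
	struct cpuset *cpuset;
	unsigned long mems_allowed;	/* per-task copy used by the allocator */
	int mems_generation;		/* generation that copy was taken at */
};

/* called in the current task's context on the way into the page allocator */
static void refresh_mems_allowed(struct task *tsk)
{
	if (tsk->mems_generation != tsk->cpuset->mems_generation) {
		tsk->mems_allowed = tsk->cpuset->mems_allowed;
		tsk->mems_generation = tsk->cpuset->mems_generation;
	}
}

int main(void)
{
	struct cpuset cs = { .mems_allowed = 0x3, .mems_generation = 1 };
	struct task tsk = { .cpuset = &cs, .mems_allowed = 0, .mems_generation = 0 };

	refresh_mems_allowed(&tsk);	/* picks up nodes 0-1 */

	cs.mems_allowed = 0xc;		/* administrator moves the cpuset */
	cs.mems_generation++;

	refresh_mems_allowed(&tsk);	/* next allocation sees nodes 2-3 */
	printf("mems_allowed: 0x%lx\n", tsk.mems_allowed);
	return 0;
}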

We also need to begin correct placement earlier in the boot process
than when a user daemon could start. It's important to get init
and the early shared libraries placed. This part has reasons of
its own to be pre-init. I am able to do this in user space today,
because the kernel has cpuset support, but I'd have to fold at
least this much back into the kernel otherwise.

And of course the hooks I added to __alloc_pages, to only allow
allocations from nodes in the task's mems_allowed, would still be needed
in some form, just as the scheduler's existing check for cpus_allowed is
needed in some form (perhaps less blunt).
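
Reduced to a stand-alone toy, that allocation hook amounts to filtering
a fallback list of node ids against mems_allowed; the flat array here
stands in for the real zonelist walk:

#include <stdio.h>

static int node_allowed(int node, unsigned long mems_allowed)
{
	return (mems_allowed >> node) & 1;
}

/* return the first permitted node in fallback order, or -1 if none */
static int pick_node(const int *fallback, int n, unsigned long mems_allowed)
{
	int i;

	for (i = 0; i < n; i++)
		if (node_allowed(fallback[i], mems_allowed))
			return fallback[i];
	return -1;
}

int main(void)
{
	int order[] = { 2, 3, 0, 1 };		/* preferred node first */
	int node = pick_node(order, 4, 0x3);	/* cpuset allows nodes 0-1 */

	printf("allocating from node %d\n", node);	/* prints 0 */
	return 0;
}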

The hook in the sched code to offline a cpu needs to know what else is
allowed in a task's cpuset so it can honor the cpuset boundary, if
possible, when migrating the task off the departing cpu. Would you want
this code calling out to a user daemon to determine what cpu to use
next?
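
What that hook has to decide can be sketched stand-alone: prefer an
online cpu still inside the cpuset, and only spill outside it as a last
resort. Toy masks and a made-up function name, not the real migration
code:

#include <stdio.h>

static int first_cpu(unsigned long mask)
{
	int cpu;

	for (cpu = 0; cpu < 8 * (int)sizeof(mask); cpu++)
		if ((mask >> cpu) & 1)
			return cpu;
	return -1;
}

static int pick_migration_target(int dying_cpu, unsigned long online,
				 unsigned long cpuset_cpus)
{
	unsigned long candidates = online & cpuset_cpus & ~(1UL << dying_cpu);

	if (candidates)
		return first_cpu(candidates);
	/* cpuset exhausted: fall back to any remaining online cpu */
	return first_cpu(online & ~(1UL << dying_cpu));
}

int main(void)
{
	/* cpus 0-3 online, cpuset allows cpus 2-3, cpu 2 is going down */
	printf("migrate to cpu %d\n", pick_migration_target(2, 0xf, 0xc));	/* 3 */
	return 0;
}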

The cpuset file system seems like an excellent way to present a system
wide hierarchical name space. I guess that this could be done as a
mount handled by my user space daemon, but using vfs for this sure
seemed sweet at the time.

There's a Linus quote I'm trying to remember ... something about while
kernels have an important role in providing hardware access, their
biggest job is in providing a coherent view of system wide resources.
Does this ring a bell? I haven't been able to recall enough of the
actual wording to google it.

--
I won't rest till it's the best ...
Programmer, Linux Scalability
Paul Jackson <pj@sgi.com> 1.650.933.1373