    Date: 20 Oct 2006
    From: Paul Jackson <pj@sgi.com>
    Subject: Re: [RFC] cpuset: add interface to isolated cpus
    Nick wrote:
    > I'm talking about isolcpus. What do isolcpus have to do with cpusets.c?
    > You can turn off CONFIG_CPUSETS and still use isolcpus, can't you?

    The connection of isolcpus with cpusets is about as strong (or weak)
    as the connection of sched domain partitioning with cpusets.

    Both isolcpus and sched domain partitioning are tweaks to the scheduler
    domain code that lessen the impact of load balancing, either by making
    sched domains smaller or by avoiding some cpus entirely.
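
    For illustration only (the cpu numbers and command name here are
    invented), the isolcpus tweak is just a boot parameter, and work lands
    on the isolated cpus only if its affinity is set there explicitly:

        # kernel command line, e.g. appended in the boot loader config:
        #   isolcpus=4,5,6,7
        #
        # the scheduler will not load balance onto cpus 4-7, so a task
        # has to be pinned onto them by hand, for example:
        taskset -c 4 ./my_job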

    Cpusets is concerned with the placement of tasks on cpus and nodes.

    That's more related to sched domains (and hence to isolated cpus and
    sched domain partitioning) than it is, say, to the crypto code for
    generating random numbers for /dev/random.

    But I will grant it is not a strong connection.

    Apparently you were seeing the potential for a stronger connection
    between cpusets and sched domain partitioning, because you thought
    that the cpuset configuration naturally defined some partitioning of
    the system's cpus, which it would behoove the sched domain partitioning
    code to take advantage of.

    Unfortunately, cpusets define no such partitioning ... not system wide.

    > So if you have a particular policy you need to implement, which is
    > one cpus_exclusive cpuset off the root, covering half the cpus in
    > the system (as a simple example)... why is it good to implement
    > that with set_cpus_allowed and bad to implement it with partitions?

    A cpu_exclusive cpuset does not implement the policy of partitioning
    a system such that no one else can use those cpus. It implements
    a policy of limiting the sharing of cpus to one's cpuset ancestors
    and descendants - no distant cousins. That makes it easier for system
    admins and batch schedulers to manage the sharing of resources to meet
    their needs. They have fewer places to check for tasks that might be
    intruding.
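
    For reference, here is a minimal sketch of setting up such a
    cpu_exclusive cpuset through the cpuset pseudo filesystem (the mount
    point, cpuset name, and cpu/node numbers are just examples):

        mount -t cpuset cpuset /dev/cpuset   # if not already mounted
        mkdir /dev/cpuset/batch              # child of the top cpuset
        echo 4-7 > /dev/cpuset/batch/cpus    # half the cpus, say
        echo 1   > /dev/cpuset/batch/mems    # a memory node to go with them
        echo 1   > /dev/cpuset/batch/cpu_exclusive
        echo $$  > /dev/cpuset/batch/tasks   # move this shell into it

    Note that cpu_exclusive only forbids sibling cpusets from overlapping
    these cpus; it does not, by itself, keep tasks left in the top cpuset
    off cpus 4-7.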

    It would be reasonable, for example, to do the following. One could
    have a big job that depends on scheduler load balancing running in
    the top cpuset, covering the entire system, and smaller jobs in child
    cpusets. During the day one pauses the big job and lets the smaller
    jobs run. During the night, one does the reverse. Granted, this
    example is a little hypothetical. Things like this are done, but
    usually a couple of layers further down the cpuset hierarchy. At the
    top level, one common configuration that I am aware of puts almost the
    entire system into one cpuset, to be managed by the batch scheduler,
    and just a handful of cpus into a separate cpuset, to handle the
    classic Unix load (init, daemons, cron, admin logins).
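
    That common top-level split might look roughly like this (again just a
    sketch, assuming a made-up 64 cpu, 8 node machine; the cpuset names are
    invented, and moving every existing task is glossed over):

        # a small cpuset for the classic Unix load:
        mkdir /dev/cpuset/boot
        echo 0-1 > /dev/cpuset/boot/cpus
        echo 0   > /dev/cpuset/boot/mems
        # sweep init, daemons, cron, login shells and such into it
        # (in practice kernel threads need some care here):
        for pid in $(cat /dev/cpuset/tasks); do
            echo $pid > /dev/cpuset/boot/tasks 2>/dev/null
        done

        # the rest of the machine, handed to the batch scheduler:
        mkdir /dev/cpuset/batch
        echo 2-63 > /dev/cpuset/batch/cpus
        echo 1-7  > /dev/cpuset/batch/mems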

    --
    I won't rest till it's the best ...
    Programmer, Linux Scalability
    Paul Jackson <pj@sgi.com> 1.925.600.0401
