Subject: Re: [RFC PATCH] Dynamic sched domains aka Isolated cpusets
    Dinakar Guniguntala wrote:
    > Here's an attempt at dynamic sched domains aka isolated cpusets
    >

Very good - I was wondering when someone would try to implement this ;)

    It needs some work. A few initial comments on the kernel/sched.c change
    - sorry, don't have too much time right now...

    > --- linux-2.6.12-rc1-mm1.orig/kernel/sched.c 2005-04-18 00:46:40.000000000 +0530
    > +++ linux-2.6.12-rc1-mm1/kernel/sched.c 2005-04-18 00:47:55.000000000 +0530
    > @@ -4895,40 +4895,41 @@ static void check_sibling_maps(void)
    > }
    > #endif
    >
    > -/*
    > - * Set up scheduler domains and groups. Callers must hold the hotplug lock.
    > - */
    > -static void __devinit arch_init_sched_domains(void)
    > +static void attach_domains(cpumask_t cpu_map)
    > {

This shouldn't be needed. There should probably be just one place that
attaches all domains; it's a bit difficult to explain what I mean when
there are two such places, as below.
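
What I mean, very roughly (sd_for_cpu() here is a made-up stand-in for
however the top-level domain of a cpu gets looked up):

	static void build_sched_domains(cpumask_t span)
	{
		int i;

		/* ... set up domains and groups for the cpus in span ... */

		/* attach them - ideally the only place that ever does this */
		for_each_cpu_mask(i, span)
			cpu_attach_domain(sd_for_cpu(i), i);
	}

Then the callers never attach anything themselves.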

    [...]

    > +void rebuild_sched_domains(cpumask_t change_map, cpumask_t span1, cpumask_t span2)
    > +{

The interface isn't bad, and it seems able to handle everything, but I
think it can be made a bit simpler:

    fn_name(cpumask_t span1, cpumask_t span2)

Yeah? The change_map is implicitly the union of the two spans. Also, I
don't really like the name: it doesn't rebuild so much as split (or
join). I can't think of anything better off the top of my head, though.
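
Whatever it ends up being called, this is the shape I mean (sketch
only, fn_name still a placeholder):

	void fn_name(cpumask_t span1, cpumask_t span2)
	{
		cpumask_t change_map;

		/* the affected cpus are just the union of the two spans */
		cpus_or(change_map, span1, span2);

		/* ... detach change_map, then rebuild span1 and span2 ... */
	}

One less argument for callers to get wrong.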

    > + unsigned long flags;
    > + int i;
    > +
    > + local_irq_save(flags);
    > +
    > + for_each_cpu_mask(i, change_map)
    > + spin_lock(&cpu_rq(i)->lock);
    > +

The locking is wrong, and it has changed again in the latest -mm
kernel - please diff against that.

    > + if (!cpus_empty(span1))
    > + build_sched_domains(span1);
    > + if (!cpus_empty(span2))
    > + build_sched_domains(span2);
    > +

    You also can't do this - you have to 'offline' the domains first before
    building new ones. See the CPU hotplug code that handles this.
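
For reference, the hotplug path does it in this order (sketching from
memory, so check the details against the tree):

	/* 1. detach: point the affected cpus at the dummy domain */
	for_each_cpu_mask(i, change_map)
		cpu_attach_domain(&sched_domain_dummy, i);

	/* 2. tear down the old domains */
	arch_destroy_sched_domains();

	/* 3. only now build and attach the new ones */
	build_sched_domains(span1);
	build_sched_domains(span2);

If you skip step 1, cpus can still be walking the old domains while
you overwrite them.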

    [...]

    > @@ -5046,13 +5082,13 @@ static int update_sched_domains(struct n
    > unsigned long action, void *hcpu)
    > {
    > int i;
    > + cpumask_t temp_map, hotcpu = cpumask_of_cpu((long)hcpu);
    >
    > switch (action) {
    > case CPU_UP_PREPARE:
    > case CPU_DOWN_PREPARE:
    > - for_each_online_cpu(i)
    > - cpu_attach_domain(&sched_domain_dummy, i);
    > - arch_destroy_sched_domains();
    > + cpus_andnot(temp_map, cpu_online_map, hotcpu);
    > + rebuild_sched_domains(cpu_online_map, temp_map, CPU_MASK_NONE);

    This makes a hotplug event destroy your nicely set up isolated domains,
    doesn't it?

    This looks like the most difficult problem to overcome. It needs some
    external information to redo the cpuset splits at cpu hotplug time.
A hotplug handler in the cpusets code is probably the best way
to do that.
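
Something along these lines in kernel/cpuset.c, perhaps - a sketch
only, and cpuset_rebuild_domains() is a made-up name for whatever ends
up redoing the splits:

	static int cpuset_handle_cpuhp(struct notifier_block *nb,
				unsigned long action, void *hcpu)
	{
		switch (action) {
		case CPU_ONLINE:
		case CPU_DEAD:
			/* redo the cpuset -> sched domain partitions */
			cpuset_rebuild_domains();
			break;
		}
		return NOTIFY_OK;
	}

	static struct notifier_block cpuset_cpuhp_nb = {
		.notifier_call = cpuset_handle_cpuhp,
	};

	/* ... register_cpu_notifier(&cpuset_cpuhp_nb) at init ... */

That way the code that actually knows how the machine is partitioned
gets to rebuild the domains after a hotplug event.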

    --
    SUSE Labs, Novell Inc.

