    Subject: Re: Re: [patch] ARM: smpboot: Enable interrupts after marking CPU online/active
    From: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Date: 15 Dec 2011

    Noting that we really should clean up the cpu hotplug code (there's far
    too much code duplication in the arch/ implementations), the patch below
    looks like it might actually work.

    Thomas, others, please have a very close look because I might have
    thought too hard and missed the obvious :-)

    ---
    Subject: hotplug, sched: cpu_active vs __cpu_up

    Stepan found:

    CPU0                                CPUn

    _cpu_up()
      __cpu_up()

                                        bootstrap()
                                          notify_cpu_starting()
                                          set_cpu_online()
                                          while (!cpu_active())
                                            cpu_relax()

    <PREEMPT-out>

    smp_call_function(.wait=1)
      /* we find cpu_online() is true */
      arch_send_call_function_ipi_mask()

      /* wait-forever-more */

    <PREEMPT-in>
    local_irq_enable()

    cpu_notify(CPU_ONLINE)
      sched_cpu_active()
        set_cpu_active()
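    For reference, the side that hangs lives in kernel/smp.c. A trimmed
    sketch of the single-target variant of that path as it looked around
    this time (illustrative, not the verbatim source; the many-cpu
    smp_call_function() path waits on the same kind of csd lock):

    	/* Trimmed sketch of the kernel/smp.c sender path, circa v3.2. */
    	static void csd_lock_wait(struct call_single_data *data)
    	{
    		/* Cleared by the target cpu from its IPI handler. */
    		while (data->flags & CSD_FLAG_LOCK)
    			cpu_relax();
    	}

    	static void generic_exec_single(int cpu, struct call_single_data *data,
    					int wait)
    	{
    		/* ... data was queued on the target cpu's call queue ... */
    		arch_send_call_function_single_ipi(cpu);

    		/*
    		 * CPUn above still has interrupts disabled, so it never runs
    		 * its IPI handler and never clears CSD_FLAG_LOCK; with
    		 * .wait=1 the caller spins here forever.
    		 */
    		if (wait)
    			csd_lock_wait(data);
    	}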

    Now the purpose of cpu_active is mostly about bringing a cpu down: we
    mark it !active to keep the load-balancer from moving tasks to it while
    we tear the cpu down. This is required because we only update the
    sched_domain tree after the cpu is down, and that in turn is needed so
    that some tasks can still run while we bring it down; we just don't want
    new tasks to appear on it.
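    For illustration, the scheduler's placement paths pick destination cpus
    against cpu_active_mask, which is what keeps a !active cpu from receiving
    new work. Roughly (pick_dest_cpu() is a made-up helper for sketch
    purposes, not actual kernel/sched code):

    	/*
    	 * Illustrative sketch of the kind of check the placement/fallback
    	 * path makes; pick_dest_cpu() is a made-up name.
    	 */
    	static int pick_dest_cpu(struct task_struct *p)
    	{
    		/* Only consider cpus the task may run on that are active. */
    		return cpumask_any_and(tsk_cpus_allowed(p), cpu_active_mask);
    	}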

    On cpu-up however the sched_domain tree doesn't yet include the new cpu,
    so it's invisible to the load-balancer, regardless of the active state.
    So instead of setting the active state after we boot the new cpu (and
    consequently having to wait for it before enabling interrupts), set the
    cpu active before we set it online and avoid the whole mess.
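    With that, the secondary startup path on each architecture reduces to
    roughly the shape below (an illustrative sketch modelled on the hexagon
    variant in the diff; per-arch details differ):

    	/*
    	 * Illustrative shape of an arch's secondary-cpu startup after this
    	 * patch; not any one architecture verbatim.
    	 */
    	void __cpuinit start_secondary(void)
    	{
    		unsigned int cpu = smp_processor_id();

    		/* ... arch specific init: MMU, per-cpu timers, etc ... */

    		/*
    		 * Runs the CPU_STARTING notifiers on this cpu with interrupts
    		 * still disabled; with this patch that now includes
    		 * sched_cpu_active() -> set_cpu_active(cpu, true).
    		 */
    		notify_cpu_starting(cpu);

    		/* Only now become visible in cpu_online_mask ... */
    		set_cpu_online(cpu, true);

    		/*
    		 * ... so any IPI sent to us because we look online can be
    		 * handled as soon as interrupts come on; nothing left to
    		 * wait for.
    		 */
    		local_irq_enable();

    		cpu_idle();
    	}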

    Reported-by: Stepan Moskovchenko <stepanm@codeaurora.org>
    Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
    ---
    arch/arm/kernel/smp.c     |    7 -------
    arch/hexagon/kernel/smp.c |    2 --
    arch/s390/kernel/smp.c    |    6 ------
    arch/x86/kernel/smpboot.c |   13 -------------
    kernel/sched/core.c       |    2 +-
    5 files changed, 1 insertion(+), 29 deletions(-)

    --- a/arch/arm/kernel/smp.c
    +++ b/arch/arm/kernel/smp.c
    @@ -337,13 +337,6 @@ asmlinkage void __cpuinit secondary_star
     	 */
     	percpu_timer_setup();
 
    -	while (!cpu_active(cpu))
    -		cpu_relax();
    -
    -	/*
    -	 * cpu_active bit is set, so it's safe to enalbe interrupts
    -	 * now.
    -	 */
     	local_irq_enable();
     	local_fiq_enable();

    --- a/arch/hexagon/kernel/smp.c
    +++ b/arch/hexagon/kernel/smp.c
    @@ -179,8 +179,6 @@ void __cpuinit start_secondary(void)
     	printk(KERN_INFO "%s cpu %d\n", __func__, current_thread_info()->cpu);
 
     	set_cpu_online(cpu, true);
    -	while (!cpumask_test_cpu(cpu, cpu_active_mask))
    -		cpu_relax();
     	local_irq_enable();
 
     	cpu_idle();
    --- a/arch/s390/kernel/smp.c
    +++ b/arch/s390/kernel/smp.c
    @@ -518,12 +518,6 @@ int __cpuinit start_secondary(void *cpuv
     	S390_lowcore.restart_psw.addr =
     		PSW_ADDR_AMODE | (unsigned long) psw_restart_int_handler;
     	__ctl_set_bit(0, 28); /* Enable lowcore protection */
    -	/*
    -	 * Wait until the cpu which brought this one up marked it
    -	 * active before enabling interrupts.
    -	 */
    -	while (!cpumask_test_cpu(smp_processor_id(), cpu_active_mask))
    -		cpu_relax();
     	local_irq_enable();
     	/* cpu_idle will call schedule for us */
     	cpu_idle();
    --- a/arch/x86/kernel/smpboot.c
    +++ b/arch/x86/kernel/smpboot.c
    @@ -291,19 +291,6 @@ notrace static void __cpuinit start_seco
     	per_cpu(cpu_state, smp_processor_id()) = CPU_ONLINE;
     	x86_platform.nmi_init();
 
    -	/*
    -	 * Wait until the cpu which brought this one up marked it
    -	 * online before enabling interrupts. If we don't do that then
    -	 * we can end up waking up the softirq thread before this cpu
    -	 * reached the active state, which makes the scheduler unhappy
    -	 * and schedule the softirq thread on the wrong cpu. This is
    -	 * only observable with forced threaded interrupts, but in
    -	 * theory it could also happen w/o them. It's just way harder
    -	 * to achieve.
    -	 */
    -	while (!cpumask_test_cpu(smp_processor_id(), cpu_active_mask))
    -		cpu_relax();
    -
     	/* enable local interrupts */
     	local_irq_enable();

    --- a/kernel/sched/core.c
    +++ b/kernel/sched/core.c
    @@ -5390,7 +5390,7 @@ static int __cpuinit sched_cpu_active(st
     				       unsigned long action, void *hcpu)
     {
     	switch (action & ~CPU_TASKS_FROZEN) {
    -	case CPU_ONLINE:
    +	case CPU_STARTING:
     	case CPU_DOWN_FAILED:
     		set_cpu_active((long)hcpu, true);
     		return NOTIFY_OK;

