    Subject: [tip:sched/core] sched/fair: Propagate asynchronous detach
    Commit-ID:  4e5160766fcc9f41bbd38bac11f92dce993644aa
    Gitweb: http://git.kernel.org/tip/4e5160766fcc9f41bbd38bac11f92dce993644aa
    Author: Vincent Guittot <vincent.guittot@linaro.org>
    AuthorDate: Tue, 8 Nov 2016 10:53:46 +0100
    Committer: Ingo Molnar <mingo@kernel.org>
    CommitDate: Wed, 16 Nov 2016 10:29:10 +0100

    sched/fair: Propagate asynchronous detach

    A task can be asynchronously detached from its cfs_rq when it migrates
    between CPUs. The load of the migrated task is then removed from the
    source cfs_rq during that cfs_rq's next update. We use this event to
    set the propagation flag.

    During load balancing, we take advantage of the blocked-load update to
    propagate any pending changes up the hierarchy.

    The propagation relies on the patch:

    "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"

    ... which orders children before their parents, so that the propagation
    is done in a single pass.

    Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Acked-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Morten.Rasmussen@arm.com
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: bsegall@google.com
    Cc: kernellwp@gmail.com
    Cc: pjt@google.com
    Cc: yuyang.du@intel.com
    Link: http://lkml.kernel.org/r/1478598827-32372-6-git-send-email-vincent.guittot@linaro.org
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    ---
    kernel/sched/fair.c | 6 ++++++
    1 file changed, 6 insertions(+)

    diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
    index 8cf26fd..090a9bb 100644
    --- a/kernel/sched/fair.c
    +++ b/kernel/sched/fair.c
    @@ -3219,6 +3219,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
     		sub_positive(&sa->load_avg, r);
     		sub_positive(&sa->load_sum, r * LOAD_AVG_MAX);
     		removed_load = 1;
    +		set_tg_cfs_propagate(cfs_rq);
     	}
     
     	if (atomic_long_read(&cfs_rq->removed_util_avg)) {
    @@ -3226,6 +3227,7 @@ update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq, bool update_freq)
     		sub_positive(&sa->util_avg, r);
     		sub_positive(&sa->util_sum, r * LOAD_AVG_MAX);
     		removed_util = 1;
    +		set_tg_cfs_propagate(cfs_rq);
     	}
     
     	decayed = __update_load_avg(now, cpu_of(rq_of(cfs_rq)), sa,
    @@ -6872,6 +6874,10 @@ static void update_blocked_averages(int cpu)
     
     		if (update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, true))
     			update_tg_load_avg(cfs_rq, 0);
    +
    +		/* Propagate pending load changes to the parent */
    +		if (cfs_rq->tg->se[cpu])
    +			update_load_avg(cfs_rq->tg->se[cpu], 0);
     	}
     	raw_spin_unlock_irqrestore(&rq->lock, flags);
     }
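
    For illustration, here is a minimal, self-contained sketch of the pattern the
    patch above applies: folding asynchronously removed load into a cfs_rq sets a
    propagation flag, and the child-before-parent walk done for the blocked-load
    update then pushes the pending delta one level up per group, so a single pass
    reaches the root. All structures and helpers below (toy_cfs_rq,
    toy_update_cfs_rq(), toy_propagate_to_parent()) are simplified, hypothetical
    stand-ins, not the kernel's actual types or functions.

    /* Minimal standalone sketch; not kernel code. */
    #include <stdbool.h>
    #include <stdio.h>

    struct toy_cfs_rq {
    	const char *name;
    	long load_avg;			/* local load average */
    	long removed_load;		/* load detached asynchronously by migration */
    	long prop_delta;		/* pending change to push to the parent */
    	bool propagate;			/* set once removed load has been folded in */
    	struct toy_cfs_rq *parent;
    };

    /* Fold asynchronously removed load into the cfs_rq and flag propagation. */
    static void toy_update_cfs_rq(struct toy_cfs_rq *cfs_rq)
    {
    	if (cfs_rq->removed_load) {
    		long r = cfs_rq->removed_load;

    		cfs_rq->removed_load = 0;
    		cfs_rq->load_avg -= r;
    		cfs_rq->prop_delta -= r;
    		cfs_rq->propagate = true;	/* analogous to set_tg_cfs_propagate() */
    	}
    }

    /* Push a pending delta one level up, loosely mirroring what the added
     * update_load_avg() call on cfs_rq->tg->se[cpu] triggers in the patch. */
    static void toy_propagate_to_parent(struct toy_cfs_rq *cfs_rq)
    {
    	if (!cfs_rq->propagate || !cfs_rq->parent)
    		return;

    	cfs_rq->parent->load_avg += cfs_rq->prop_delta;
    	cfs_rq->parent->prop_delta += cfs_rq->prop_delta;
    	cfs_rq->parent->propagate = true;
    	cfs_rq->prop_delta = 0;
    	cfs_rq->propagate = false;
    }

    int main(void)
    {
    	struct toy_cfs_rq root = { .name = "root", .load_avg = 400 };
    	struct toy_cfs_rq tg   = { .name = "tg",   .load_avg = 300, .parent = &root };
    	struct toy_cfs_rq leaf = { .name = "leaf", .load_avg = 100, .parent = &tg };

    	/* A task with load 100 migrated away; its load is removed asynchronously. */
    	leaf.removed_load = 100;

    	/*
    	 * The blocked-load update walks children before parents (the order
    	 * guaranteed by "sched: Fix hierarchical order in rq->leaf_cfs_rq_list"),
    	 * so one pass over the list is enough to reach the root.
    	 */
    	struct toy_cfs_rq *walk[] = { &leaf, &tg, &root };
    	for (unsigned int i = 0; i < sizeof(walk) / sizeof(walk[0]); i++) {
    		toy_update_cfs_rq(walk[i]);
    		toy_propagate_to_parent(walk[i]);
    	}

    	printf("leaf=%ld tg=%ld root=%ld\n",
    	       leaf.load_avg, tg.load_avg, root.load_avg);	/* leaf=0 tg=200 root=300 */
    	return 0;
    }

    Without the child-before-parent ordering, the leaf's delta could be folded in
    after its parent had already been updated, and the change would only reach the
    root on a later pass.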