    Subject: Re: [patch] sched: fix uneven per-cpu task_group share distribution

    * Peter Zijlstra <> wrote:

    > On Mon, 2008-12-15 at 23:37 -0800, Ken Chen wrote:
    > > While testing the CFS scheduler on the linux-2.6-tip tree
    > > git://
    > >
    > > we found that a task pinned to a CPU could be starved relative to its
    > > allocated fair share.
    > >
    > > The per-cpu sched_entity load-share calculation in tg_shares_up /
    > > update_group_shares_cpu distributes the task_group's total shares among all
    > > CPUs in a given sched domain. This dilutes the task_group's per-cpu share,
    > > because shares are handed even to CPUs that carry no load for the group. The
    > > trapped share is unconsumable and leads to fair-share starvation on the
    > > runnable CPU.
    > > Peter was right that the low-level function still needs to distinguish
    > > between a boosted share on CPUs that have no load and the actual tg share
    > > that should be distributed among the CPUs on which the tg is running.
    > >
    > > This patch adds that boost back. We think the scheduler should boost at most
    > > one tg share in total across all empty CPUs that have no load for the
    > > specific task_group, in order to bound the maximum temporary boost a given
    > > task_group can receive.
    > Yeah, I was worried about you removing that boost, but since you seemed
    > to be doing serious testing on the thing... ah well, this sounds good.
    > > Signed-off-by: Ken Chen <>
    > OK, patch looks good too, thanks!
    > Acked-by: Peter Zijlstra <>

    applied to tip/cpus4096. [not in the tip/sched/core branch, because
    cpumask_t changes conflict with Ken's fix so this is the proper ordering.]
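
    To make the dilution concrete, here is a minimal user-space C sketch. It is
    not the actual kernel code or Ken's patch; NR_CPUS, TG_SHARES, cpu_load and
    the distribute() helper are all illustrative assumptions. It mimics a
    tg_shares_up-style distribution: each CPU's slice of the group's shares is
    proportional to the group's load on that CPU, except that CPUs with no group
    load are assigned a boost weight. A large boost weight dilutes the share of
    the CPU where the pinned task actually runs; a small, bounded boost keeps
    nearly the full share on the runnable CPU.

	/*
	 * Minimal sketch (not the real kernel code) of how distributing a
	 * task_group's shares across all CPUs in a domain can dilute the
	 * share of the one CPU that actually has load, and how bounding
	 * the "boost" weight given to load-less CPUs avoids that.  All
	 * names and numbers here are illustrative.
	 */
	#include <stdio.h>

	#define NR_CPUS		4
	#define TG_SHARES	1024UL	/* total shares owned by the task group */

	/* group load per CPU: the task is pinned to cpu0, the rest are idle */
	static unsigned long cpu_load[NR_CPUS] = { 800, 0, 0, 0 };
	static unsigned long cpu_share[NR_CPUS];

	/*
	 * Distribute TG_SHARES proportionally to per-cpu weight, where a CPU
	 * with no load from this group is given 'idle_boost' weight instead.
	 */
	static void distribute(unsigned long idle_boost)
	{
		unsigned long total = 0;
		int cpu;

		for (cpu = 0; cpu < NR_CPUS; cpu++)
			total += cpu_load[cpu] ? cpu_load[cpu] : idle_boost;

		for (cpu = 0; cpu < NR_CPUS; cpu++) {
			unsigned long w = cpu_load[cpu] ? cpu_load[cpu] : idle_boost;
			cpu_share[cpu] = TG_SHARES * w / total;
		}
	}

	static void dump(const char *what)
	{
		int cpu;

		printf("%s:\n", what);
		for (cpu = 0; cpu < NR_CPUS; cpu++)
			printf("  cpu%d: share=%lu\n", cpu, cpu_share[cpu]);
	}

	int main(void)
	{
		/* Full boost on idle CPUs: cpu0 ends up with ~1/4 of the shares. */
		distribute(800);
		dump("unbounded boost (pinned task diluted)");

		/* Small bounded boost: cpu0 keeps nearly the whole group share. */
		distribute(8);
		dump("bounded boost (pinned task keeps its fair share)");

		return 0;
	}

    In this toy setup the first pass gives cpu0 only 256 of the 1024 shares, with
    the remainder trapped on idle CPUs (the unconsumable share Ken describes),
    while the second pass leaves cpu0 with roughly 994 shares.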

