Subject: Re: [PATCH] x86,sched: Fix sched_smt_power_savings totally broken
> Note, this has the hard-coded assumption you only have 2 threads per
> core, which while true for Intel, isn't true in general. I think you
> meant to write *= group->group_weight or somesuch.
>
> Also, you forgot to limit this to the SD_SHARE_CPUPOWER domain, you're
> now doubling the capacity for all domains.
>
> Furthermore, have a look at the SD_PREFER_SIBLING logic and make sure
> you're not fighting that.
>
Thanks Peter! Here is the patch.

-Youquan

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4d2b7a..4ada3e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3923,6 +3923,10 @@ static inline void update_sg_lb_stats(struct sched_domain *sd,
 						SCHED_POWER_SCALE);
 	if (!sgs->group_capacity)
 		sgs->group_capacity = fix_small_capacity(sd, group);
+
+	if (sched_smt_power_savings && !(sd->flags & SD_SHARE_CPUPOWER))
+		sgs->group_capacity = group->group_weight;
+
 	sgs->group_weight = group->group_weight;
 
 	if (sgs->group_capacity > sgs->sum_nr_running)
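For illustration, here is a minimal stand-alone sketch of what the new check does to a group's capacity. The helper, the flag value, and the power number below are made up for the example; this is not the actual fair.c code, only a rough model of the computation the hunk touches.

#include <stdio.h>

/* Illustrative constants: SCHED_POWER_SCALE matches the kernel's 1024,
 * but the flag value here is invented for the example. */
#define SCHED_POWER_SCALE	1024
#define SD_SHARE_CPUPOWER	0x1	/* set only on the SMT (sibling) domain */

/* Rough model of the capacity computation around the hunk above:
 * capacity is normally the group's cpu power scaled down by
 * SCHED_POWER_SCALE (the kernel rounds, this sketch truncates); with
 * SMT power savings, groups above the SMT level instead report one
 * capacity unit per logical CPU in the group. */
static unsigned int capacity_of_group(unsigned int group_power,
				      unsigned int group_weight,
				      int sd_flags,
				      int smt_power_savings)
{
	unsigned int capacity = group_power / SCHED_POWER_SCALE;

	if (smt_power_savings && !(sd_flags & SD_SHARE_CPUPOWER))
		capacity = group_weight;

	return capacity;
}

int main(void)
{
	/* A group covering one 2-thread SMT core; 1178 is a plausible-looking
	 * power value for such a core, chosen only for the example. */
	unsigned int power = 1178, weight = 2;

	printf("default capacity:           %u\n",
	       capacity_of_group(power, weight, 0, 0));	/* prints 1 */
	printf("smt power-savings capacity: %u\n",
	       capacity_of_group(power, weight, 0, 1));	/* prints 2 */
	return 0;
}

The point of the example: with sched_smt_power_savings set, groups at levels above the SMT domain (those without SD_SHARE_CPUPOWER) report one unit of capacity per logical CPU rather than the power-scaled value, so the balancer is willing to queue a task on each sibling thread of a core instead of spreading tasks across cores.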