From: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Date: 2008-06-27
Subject: [PATCH 27/30] sched: fix mult overflow

It was observed that these multiplications can overflow: the operands are unsigned long, which is only 32 bits wide on 32-bit machines, so a product like rem_load_move * busiest_weight gets truncated. Do the intermediate arithmetic in u64 instead, and switch the divisions to div_u64(), since a plain '/' on a u64 is not available in the kernel on 32-bit targets.
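
A minimal userspace sketch of the truncation, using uint32_t to stand in for i386's 32-bit unsigned long (the values below are made up for illustration):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t rem_load_move = 1UL << 20;	/* pending load to move */
	uint32_t busiest_weight = 1UL << 15;	/* group weight */

	/* 32-bit multiply: the product 2^35 wraps, low 32 bits are 0 */
	uint32_t truncated = rem_load_move * busiest_weight;

	/* widened multiply, as the patch does with the (u64) cast */
	uint64_t widened = (uint64_t)rem_load_move * busiest_weight;

	printf("32-bit product: %u\n", truncated);	/* prints 0 */
	printf("64-bit product: %llu\n", (unsigned long long)widened);
	return 0;
}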

Signed-off-by: Srivatsa Vaddagiri <vatsa@linux.vnet.ibm.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/sched_fair.c |    8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

Index: linux-2.6/kernel/sched_fair.c
===================================================================
--- linux-2.6.orig/kernel/sched_fair.c
+++ linux-2.6/kernel/sched_fair.c
@@ -1518,7 +1518,7 @@ load_balance_fair(struct rq *this_rq, in
 		struct cfs_rq *busiest_cfs_rq = tg->cfs_rq[busiest_cpu];
 		unsigned long busiest_h_load = busiest_cfs_rq->h_load;
 		unsigned long busiest_weight = busiest_cfs_rq->load.weight;
-		long rem_load, moved_load;
+		u64 rem_load, moved_load;
 
 		/*
 		 * empty group
@@ -1526,8 +1526,8 @@ load_balance_fair(struct rq *this_rq, in
 		if (!busiest_cfs_rq->task_weight)
 			continue;
 
-		rem_load = rem_load_move * busiest_weight;
-		rem_load /= busiest_h_load + 1;
+		rem_load = (u64)rem_load_move * busiest_weight;
+		rem_load = div_u64(rem_load, busiest_h_load + 1);
 
 		moved_load = __load_balance_fair(this_rq, this_cpu, busiest,
 				rem_load, sd, idle, all_pinned, this_best_prio,
@@ -1537,7 +1537,7 @@ load_balance_fair(struct rq *this_rq, in
 			continue;
 
 		moved_load *= busiest_h_load;
-		moved_load /= busiest_weight + 1;
+		moved_load = div_u64(moved_load, busiest_weight + 1);
 
 		rem_load_move -= moved_load;
 		if (rem_load_move < 0)
--
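
A note on why the divisions change form: inside the kernel, a plain '/' on a u64 would have the compiler emit a call to libgcc's __udivdi3 on 32-bit targets, which the kernel does not link against; div_u64() from linux/math64.h performs the u64-by-u32 division explicitly. A userspace sketch mirroring the call shape, with a stand-in for div_u64() and made-up values:

#include <stdio.h>
#include <stdint.h>

/* Userspace stand-in for the kernel's div_u64() (u64 divided by u32),
 * used here only to mirror the pattern the patch adopts.
 */
static uint64_t div_u64(uint64_t dividend, uint32_t divisor)
{
	return dividend / divisor;
}

int main(void)
{
	uint64_t rem_load = (uint64_t)(1UL << 20) * (1UL << 15);
	uint32_t busiest_h_load = 1024;	/* made-up hierarchical load */

	/* mirrors the patched code's
	 * rem_load = div_u64(rem_load, busiest_h_load + 1); */
	rem_load = div_u64(rem_load, busiest_h_load + 1);
	printf("%llu\n", (unsigned long long)rem_load);
	return 0;
}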


