From: Gregory Haskins <ghaskins@novell.com>
Subject: [PATCH 1/2] sched: remove extraneous load manipulations
Date: 2008-07-03
commit 62fb185130e4d420f71a30ff59d8b16b74ef5d2b reverted some patches
in the scheduler, but it looks like it may have left a few redundant
calls to inc_load/dec_load in set_user_nice() (dequeue_task/enqueue_task
already take care of the load). This could result in the load values
being off, since the load may change while the task is dequeued.

Signed-off-by: Gregory Haskins <ghaskins@novell.com>
CC: Peter Zijlstra <peterz@infradead.org>
CC: Ingo Molnar <mingo@elte.hu>
---

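[ Illustration for reviewers, not part of the patch: a minimal userspace
sketch of the double-accounting described above, using simplified
stand-in types and stub enqueue/dequeue helpers rather than the kernel's
real ones. It assumes, as the changelog states, that the enqueue/dequeue
paths already fold the task's weight into the runqueue load; the
1024/820 weights loosely mirror the kernel's nice-0/nice+1 values. ]

/*
 * Sketch only -- simplified stand-ins, not kernel code. Assumes
 * enqueue/dequeue already account the task's weight into rq->load.
 */
#include <stdio.h>

struct task { unsigned long weight; };
struct rq   { unsigned long load;   };

static void dequeue_task(struct rq *rq, struct task *p)
{
	rq->load -= p->weight;	/* load accounting happens here... */
}

static void enqueue_task(struct rq *rq, struct task *p)
{
	rq->load += p->weight;	/* ...and here */
}

int main(void)
{
	struct task p  = { .weight = 1024 };	/* nice-0 weight */
	struct rq   rq = { .load   = 4096 };

	/* The redundant pattern this patch removes: */
	dequeue_task(&rq, &p);
	rq.load -= p.weight;	/* extra dec_load(): old weight subtracted twice */

	p.weight = 820;		/* reniced while dequeued: weight changes */

	enqueue_task(&rq, &p);
	rq.load += p.weight;	/* extra inc_load(): new weight added twice */

	/* 4096 - 2*1024 + 2*820 = 3688, not the correct 4096 - 1024 + 820 = 3892 */
	printf("rq->load = %lu (expected 3892)\n", rq.load);
	return 0;
}
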
 kernel/sched.c |    6 ++----
 1 files changed, 2 insertions(+), 4 deletions(-)

diff --git a/kernel/sched.c b/kernel/sched.c
index 31f91d9..b046754 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -4679,10 +4679,8 @@ void set_user_nice(struct task_struct *p, long nice)
 		goto out_unlock;
 	}
 	on_rq = p->se.on_rq;
-	if (on_rq) {
+	if (on_rq)
 		dequeue_task(rq, p, 0);
-		dec_load(rq, p);
-	}
 
 	p->static_prio = NICE_TO_PRIO(nice);
 	set_load_weight(p);
@@ -4692,7 +4690,7 @@ void set_user_nice(struct task_struct *p, long nice)
 
 	if (on_rq) {
 		enqueue_task(rq, p, 0);
-		inc_load(rq, p);
+
 		/*
 		 * If the task increased its priority or is running and
 		 * lowered its priority, then reschedule its CPU:

