    Subject: [tip:sched/urgent] sched/autogroup: Fix 64-bit kernel nice level adjustment
    Commit-ID:  83929cce95251cc77e5659bf493bd424ae0e7a67
    Gitweb: http://git.kernel.org/tip/83929cce95251cc77e5659bf493bd424ae0e7a67
    Author: Mike Galbraith <efault@gmx.de>
    AuthorDate: Wed, 23 Nov 2016 11:33:37 +0100
    Committer: Ingo Molnar <mingo@kernel.org>
    CommitDate: Thu, 24 Nov 2016 05:45:02 +0100

    sched/autogroup: Fix 64-bit kernel nice level adjustment

    Michael Kerrisk reported:

    > Regarding the previous paragraph... My tests indicate
    > that writing *any* value to the autogroup [nice priority level]
    > file causes the task group to get a lower priority.

    Because autogroup didn't call scale_load(), which was a no-op (and thus
    meaningless to call) when the autogroup code was written...

    Autogroup nice level adjustment has been broken ever since load
    resolution was increased for 64-bit kernels. Use scale_load() to
    scale group weight.
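
    To see the arithmetic, here is a minimal userspace sketch (not the
    kernel's code) assuming the 64-bit v4.9 definitions, i.e.
    SCHED_FIXEDPOINT_SHIFT == 10 and scale_load(w) == (w <<
    SCHED_FIXEDPOINT_SHIFT), plus three entries copied from
    sched_prio_to_weight[]:

    /*
     * Userspace illustration only; not kernel code.  Assumes the 64-bit
     * v4.9 definitions of scale_load()/scale_load_down() and the usual
     * sched_prio_to_weight[] entries for nice -20, 0 and +19.
     */
    #include <stdio.h>

    #define SCHED_FIXEDPOINT_SHIFT	10
    #define scale_load(w)		((unsigned long)(w) << SCHED_FIXEDPOINT_SHIFT)
    #define scale_load_down(w)	((unsigned long)(w) >> SCHED_FIXEDPOINT_SHIFT)

    int main(void)
    {
    	const unsigned long w[] = { 88761, 1024, 15 };	/* nice -20, 0, +19 */
    	const int nice[]        = { -20, 0, 19 };

    	/* Group shares live in the high-resolution (scaled) domain, so
    	 * the raw table weight must be shifted up before being handed
    	 * to sched_group_set_shares(). */
    	for (int i = 0; i < 3; i++)
    		printf("nice %3d: raw %6lu (effective ~%5lu)  scaled %9lu (effective %6lu)\n",
    		       nice[i], w[i], scale_load_down(w[i]),
    		       scale_load(w[i]), scale_load_down(scale_load(w[i])));
    	return 0;
    }

    With the raw table value, nice 0 (weight 1024) collapses to an
    effective group weight of roughly 1, below even nice +19 (weight 15),
    which matches the "writing any value lowers the priority" symptom
    reported above.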

    Michael Kerrisk tested this patch to fix the problem:

    > Applied and tested against 4.9-rc6 on an Intel u7 (4 cores).
    > Test setup:
    >
    > Terminal window 1: running 40 CPU burner jobs
    > Terminal window 2: running 40 CPU burner jobs
    > Terminal window 3: running 1 CPU burner job
    >
    > Demonstrated that:
    > * Writing "0" to the autogroup file for TW1 now causes no change
    > to the rate at which the process on the terminal consume CPU.
    > * Writing -20 to the autogroup file for TW1 caused those processes
    > to get the lion's share of CPU while TW2 TW3 get a tiny amount.
    > * Writing -20 to the autogroup files for TW1 and TW3 allowed the
    > process on TW3 to get as much CPU as it was getting as when
    > the autogroup nice values for both terminals were 0.
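
    (For reference, the file being written above is /proc/<pid>/autogroup.
    A minimal standalone C equivalent of the "write a nice value, read it
    back" step might look like the sketch below; it is an illustration,
    not part of the patch, and lowering the value can require CAP_SYS_NICE
    or a suitable RLIMIT_NICE, much like setpriority().)

    /*
     * Standalone illustration (not part of the patch): set the autogroup
     * nice value for a task via /proc/<pid>/autogroup and read it back.
     * Lowering the value may need CAP_SYS_NICE or a suitable RLIMIT_NICE.
     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
    	const char *pid  = argc > 1 ? argv[1] : "self";
    	const char *nice = argc > 2 ? argv[2] : "0";
    	char path[64], line[128];
    	FILE *f;

    	snprintf(path, sizeof(path), "/proc/%s/autogroup", pid);

    	f = fopen(path, "w");
    	if (!f || fprintf(f, "%s\n", nice) < 0 || fclose(f) != 0) {
    		perror(path);
    		return EXIT_FAILURE;
    	}

    	f = fopen(path, "r");
    	if (!f || !fgets(line, sizeof(line), f)) {
    		perror(path);
    		return EXIT_FAILURE;
    	}
    	fclose(f);

    	/* Prints something like "/autogroup-123 nice 0". */
    	printf("%s: %s", path, line);
    	return 0;
    }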

    Reported-by: Michael Kerrisk <mtk.manpages@gmail.com>
    Tested-by: Michael Kerrisk <mtk.manpages@gmail.com>
    Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: linux-man <linux-man@vger.kernel.org>
    Cc: stable@vger.kernel.org
    Link: http://lkml.kernel.org/r/1479897217.4306.6.camel@gmx.de
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    ---
    kernel/sched/auto_group.c | 4 +++-
    1 file changed, 3 insertions(+), 1 deletion(-)

    diff --git a/kernel/sched/auto_group.c b/kernel/sched/auto_group.c
    index f1c8fd5..da39489 100644
    --- a/kernel/sched/auto_group.c
    +++ b/kernel/sched/auto_group.c
    @@ -212,6 +212,7 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
     {
     	static unsigned long next = INITIAL_JIFFIES;
     	struct autogroup *ag;
    +	unsigned long shares;
     	int err;
     
     	if (nice < MIN_NICE || nice > MAX_NICE)
    @@ -230,9 +231,10 @@ int proc_sched_autogroup_set_nice(struct task_struct *p, int nice)
     
     	next = HZ / 10 + jiffies;
     	ag = autogroup_task_get(p);
    +	shares = scale_load(sched_prio_to_weight[nice + 20]);
     
     	down_write(&ag->lock);
    -	err = sched_group_set_shares(ag->tg, sched_prio_to_weight[nice + 20]);
    +	err = sched_group_set_shares(ag->tg, shares);
     	if (!err)
     		ag->nice = nice;
     	up_write(&ag->lock);