Subject: [PATCH 4.2.y-ckt 093/206] sched/loadavg: Fix loadavg artifacts on fully idle and on fully loaded systems
4.2.8-ckt12 -stable review patch.  If anyone has any objections, please let me know.

---8<------------------------------------------------------------

From: Vik Heyndrickx <vik.heyndrickx@veribox.net>

commit 20878232c52329f92423d27a60e48b6a6389e0dd upstream.

Systems show a minimum load average of 0.00, 0.01, 0.05 even when they
have no load at all.

Uptime and /proc/loadavg on all systems with kernels released during the
last five years, up until kernel version 4.6-rc5, show a 5- and 15-minute
minimum loadavg of 0.01 and 0.05 respectively. This should be 0.00 on
idle systems, but the way the kernel calculates the value prevents it
from ever getting below those numbers.

Likewise, though less obviously noticeable, a fully loaded system with
no processes waiting shows a maximum 1/5/15 loadavg of 1.00, 0.99, 0.95
(multiplied by the number of cores).

Once the (old) load becomes 93 or higher, it mathematically can never
drop below 93, even when the active load remains 0 forever. This
results in the strange 0.00, 0.01, 0.05 values that uptime shows on
idle systems. Note: 93/2048 = 0.0454..., which rounds up to 0.05.
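
For illustration (not part of the original message), here is a minimal
userspace sketch of the old calc_load() that demonstrates the stuck
fixed point. The FSHIFT/FIXED_1/EXP_15 values are the kernel's own;
old_calc_load() and the main() harness are hypothetical names:

#include <stdio.h>

#define FSHIFT   11
#define FIXED_1  (1UL << FSHIFT)   /* 2048 */
#define EXP_15   2037UL            /* 1/exp(5sec/15min), fixed point */

/* The pre-patch calc_load(), reproduced in userspace. */
static unsigned long old_calc_load(unsigned long load,
				   unsigned long exp,
				   unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	load += 1UL << (FSHIFT - 1);	/* the problematic +0.5 rounding */
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long load = FIXED_1;	/* start from a loadavg of 1.00 */

	for (int i = 0; i < 2000; i++)
		load = old_calc_load(load, EXP_15, 0);	/* fully idle */

	/* prints "93 (0.05)": stuck at 93/2048 = 0.0454... */
	printf("%lu (%.2f)\n", load, (double)load / FIXED_1);
	return 0;
}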

It is not correct to add a 0.5 rounding term (= 1024/2048) here, since
the result of this function is fed back into the next iteration: the
+0.5 added in one step survives into the next scaled by exp/FIXED_1 and
is then rounded up again, so the accumulated bias converges to
1024/(2048-2037), a virtual "ghost" load next to the old and active
load terms (see the worked sum below).
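
To make the ghost load explicit (an illustration added here, written as
a LaTeX sum): each iteration contributes +0.5, and the previous bias
decays by exp/FIXED_1 = 2037/2048 for the 15-minute term, so

\[
  b_\infty = \frac{1}{2}\sum_{k=0}^{\infty}\left(\frac{2037}{2048}\right)^{k}
           = \frac{1}{2}\cdot\frac{2048}{2048-2037}
           = \frac{1024}{11} \approx 93.09
\]

which the per-step floor pins at 93, i.e. 93/2048 ≈ 0.045, displayed
as 0.05.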

By changing the way the internally kept value is rounded, that internal
value can now reach the equivalent of 0.00 when idle and 1.00 under
full load: while the load is increasing, the internally kept value is
rounded up; while it is decreasing, it is rounded down.

The modified code was tested on nohz=off and nohz kernels, on the
vanilla 4.6-rc5 kernel and on the CentOS 7.1 kernel 3.10.0-327, on
single-, dual-, and octal-core systems, and on both virtual hosts and
bare hardware. No unwanted effects were observed, and the problems the
patch intended to fix were indeed gone.

Tested-by: Damien Wyart <damien.wyart@free.fr>
Signed-off-by: Vik Heyndrickx <vik.heyndrickx@veribox.net>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Doug Smythies <dsmythies@telus.net>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Fixes: 0f004f5a696a ("sched: Cure more NO_HZ load average woes")
Link: http://lkml.kernel.org/r/e8d32bff-d544-7748-72b5-3c86cc71f09f@veribox.net
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Kamal Mostafa <kamal@canonical.com>
---
 kernel/sched/loadavg.c | 11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index ef71590..b0b93fd 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -99,10 +99,13 @@ long calc_load_fold_active(struct rq *this_rq)
 static unsigned long
 calc_load(unsigned long load, unsigned long exp, unsigned long active)
 {
-	load *= exp;
-	load += active * (FIXED_1 - exp);
-	load += 1UL << (FSHIFT - 1);
-	return load >> FSHIFT;
+	unsigned long newload;
+
+	newload = load * exp + active * (FIXED_1 - exp);
+	if (active >= load)
+		newload += FIXED_1-1;
+
+	return newload / FIXED_1;
 }
 
 #ifdef CONFIG_NO_HZ_COMMON
--
2.7.4