Subject: [PATCH 4.4 095/168] sched/fair: Do not re-read ->h_load_next during hierarchical load calculation
    From: Mel Gorman <mgorman@techsingularity.net>

    commit 0e9f02450da07fc7b1346c8c32c771555173e397 upstream.

A NULL pointer dereference bug was reported on a distribution kernel but
the same issue should be present on the mainline kernel. It occurred on s390
but should not be arch-specific. A partial oops looks like:

    Unable to handle kernel pointer dereference in virtual kernel address space
    ...
    Call Trace:
    ...
    try_to_wake_up+0xfc/0x450
    vhost_poll_wakeup+0x3a/0x50 [vhost]
    __wake_up_common+0xbc/0x178
    __wake_up_common_lock+0x9e/0x160
    __wake_up_sync_key+0x4e/0x60
    sock_def_readable+0x5e/0x98

The bug hits at any time between 1 hour and 3 days. The dereference occurs
in update_cfs_rq_h_load when accumulating h_load. The problem is that
cfs_rq->h_load_next is not protected by any locking and can be updated
by parallel calls to task_h_load. Depending on the compiler, code may be
generated that re-reads cfs_rq->h_load_next after the check for NULL and
then oopses when reading se->avg.load_avg. The disassembly showed that it
was possible to re-read h_load_next after the check for NULL.
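
To make the hazard concrete, here is a minimal userspace sketch of the racy
pattern. The type cfs_rq_sketch, the function h_load_racy and the arithmetic
are made up for illustration; only the check-then-use shape of h_load_next
mirrors the kernel code:

struct sched_entity { unsigned long load_avg; };

struct cfs_rq_sketch {
        struct sched_entity *h_load_next; /* updated by parallel task_h_load() callers */
        unsigned long h_load;
};

unsigned long h_load_racy(struct cfs_rq_sketch *rq)
{
        struct sched_entity *se = rq->h_load_next;      /* plain load */

        if (se == NULL)
                return rq->h_load;

        /*
         * Nothing obliges the compiler to keep "se" in a register; it may
         * rematerialise it by reloading rq->h_load_next here.  If a parallel
         * caller stored NULL in the meantime, the dereference below is the
         * reported NULL pointer oops.
         */
        return rq->h_load * se->load_avg;
}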

While this does not appear to be an issue for later compilers, it is still
only by accident that correct code gets generated. Full locking in this path
would have high overhead, so this patch uses READ_ONCE to read h_load_next
only once and checks that single value for NULL before dereferencing it. It
was confirmed that there were no further oopses after 10 days of testing.

    As Peter pointed out, it is also necessary to use WRITE_ONCE() to avoid any
    potential problems with store tearing.
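
Roughly speaking, READ_ONCE()/WRITE_ONCE() turn these accesses into single
volatile loads and stores of the full pointer, which in practice stops the
compiler from re-reading the value after the NULL check and from splitting
(tearing) the store. A userspace sketch of the fixed pattern, reusing the
made-up types from the sketch above (the volatile casts only stand in for
the kernel helpers):

/* Roughly what WRITE_ONCE(rq->h_load_next, se) does: one volatile,
 * full-width store that in practice cannot be torn or repeated. */
void set_h_load_next(struct cfs_rq_sketch *rq, struct sched_entity *se)
{
        *(struct sched_entity * volatile *)&rq->h_load_next = se;
}

/* Roughly what READ_ONCE(rq->h_load_next) does: one volatile load,
 * sampled exactly once into the local "se". */
unsigned long h_load_once(struct cfs_rq_sketch *rq)
{
        struct sched_entity *se =
                *(struct sched_entity * volatile *)&rq->h_load_next;

        if (se == NULL)
                return rq->h_load;

        /* Only the local copy is used; h_load_next is never re-read. */
        return rq->h_load * se->load_avg;
}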

    Signed-off-by: Mel Gorman <mgorman@techsingularity.net>
    Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
    Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
    Cc: Linus Torvalds <torvalds@linux-foundation.org>
    Cc: Mike Galbraith <efault@gmx.de>
    Cc: Peter Zijlstra <peterz@infradead.org>
    Cc: Thomas Gleixner <tglx@linutronix.de>
    Cc: <stable@vger.kernel.org>
    Fixes: 685207963be9 ("sched: Move h_load calculation to task_h_load()")
    Link: https://lkml.kernel.org/r/20190319123610.nsivgf3mjbjjesxb@techsingularity.net
    Signed-off-by: Ingo Molnar <mingo@kernel.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    kernel/sched/fair.c | 6 +++---
    1 file changed, 3 insertions(+), 3 deletions(-)

    --- a/kernel/sched/fair.c
    +++ b/kernel/sched/fair.c
@@ -6022,10 +6022,10 @@ static void update_cfs_rq_h_load(struct
         if (cfs_rq->last_h_load_update == now)
                 return;
 
-        cfs_rq->h_load_next = NULL;
+        WRITE_ONCE(cfs_rq->h_load_next, NULL);
         for_each_sched_entity(se) {
                 cfs_rq = cfs_rq_of(se);
-                cfs_rq->h_load_next = se;
+                WRITE_ONCE(cfs_rq->h_load_next, se);
                 if (cfs_rq->last_h_load_update == now)
                         break;
         }
@@ -6035,7 +6035,7 @@ static void update_cfs_rq_h_load(struct
                 cfs_rq->last_h_load_update = now;
         }
 
-        while ((se = cfs_rq->h_load_next) != NULL) {
+        while ((se = READ_ONCE(cfs_rq->h_load_next)) != NULL) {
                 load = cfs_rq->h_load;
                 load = div64_ul(load * se->avg.load_avg,
                                 cfs_rq_load_avg(cfs_rq) + 1);
