Subject: [PATCH 4.8 59/85] mm, page_alloc: keep pcp count and list contents in sync if struct page is corrupted
    4.8-stable review patch.  If anyone has any objections, please let me know.

    ------------------

    From: Mel Gorman <mgorman@techsingularity.net>

    commit a6de734bc002fe2027ccc074fbbd87d72957b7a4 upstream.

    Vlastimil Babka pointed out that commit 479f854a207c ("mm, page_alloc:
    defer debugging checks of pages allocated from the PCP") will allow the
    per-cpu list counter to be out of sync with the per-cpu list contents if
    a struct page is corrupted.

    The consequence is an infinite loop if the per-cpu lists get fully
    drained by free_pcppages_bulk because all the lists are empty but the
    count is positive. The infinite loop occurs here:

    do {
            batch_free++;
            if (++migratetype == MIGRATE_PCPTYPES)
                    migratetype = 0;
            list = &pcp->lists[migratetype];
    } while (list_empty(list));

    What the user sees is a bad page warning followed by a soft lockup with
    interrupts disabled in free_pcppages_bulk().

    This patch keeps the accounting in sync.
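
    To make the mismatch concrete, here is a minimal standalone sketch (not
    kernel code; pcp_sim and rmqueue_bulk_old are illustrative names) of how
    trusting the "pages removed from buddy" count inflates the per-cpu count
    relative to the actual list contents:

    #include <stdio.h>

    #define MIGRATE_PCPTYPES 3

    struct pcp_sim {
            int count;                      /* what the caller believes is listed */
            int list_len[MIGRATE_PCPTYPES]; /* what is actually on the lists */
    };

    /* Pre-fix behaviour: count pages removed from buddy, not pages listed. */
    static int rmqueue_bulk_old(struct pcp_sim *pcp, int requested, int bad_pages)
    {
            int i;

            for (i = 0; i < requested; i++) {
                    if (i < bad_pages)
                            continue;       /* corrupt page dropped, never listed */
                    pcp->list_len[0]++;
            }
            return i;                       /* bug: includes the dropped pages */
    }

    int main(void)
    {
            struct pcp_sim pcp = { 0 };

            pcp.count += rmqueue_bulk_old(&pcp, 8, 2);

            printf("count=%d, actually listed=%d\n", pcp.count, pcp.list_len[0]);
            /*
             * A drain that frees pages while count > 0 exhausts the lists
             * after 6 frees but still sees count == 2, which is the condition
             * behind the loop above never terminating.
             */
            return 0;
    }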

    Fixes: 479f854a207c ("mm, page_alloc: defer debugging checks of pages allocated from the PCP")
    Link: http://lkml.kernel.org/r/20161202112951.23346-2-mgorman@techsingularity.net
    Signed-off-by: Mel Gorman <mgorman@suse.de>
    Acked-by: Vlastimil Babka <vbabka@suse.cz>
    Acked-by: Michal Hocko <mhocko@suse.com>
    Acked-by: Hillf Danton <hillf.zj@alibaba-inc.com>
    Cc: Christoph Lameter <cl@linux.com>
    Cc: Johannes Weiner <hannes@cmpxchg.org>
    Cc: Jesper Dangaard Brouer <brouer@redhat.com>
    Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
    Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

    ---
    mm/page_alloc.c | 12 ++++++++++--
    1 file changed, 10 insertions(+), 2 deletions(-)

    --- a/mm/page_alloc.c
    +++ b/mm/page_alloc.c
    @@ -2173,7 +2173,7 @@ static int rmqueue_bulk(struct zone *zon
     			unsigned long count, struct list_head *list,
     			int migratetype, bool cold)
     {
    -	int i;
    +	int i, alloced = 0;
     
     	spin_lock(&zone->lock);
     	for (i = 0; i < count; ++i) {
    @@ -2198,13 +2198,21 @@ static int rmqueue_bulk(struct zone *zon
     		else
     			list_add_tail(&page->lru, list);
     		list = &page->lru;
    +		alloced++;
     		if (is_migrate_cma(get_pcppage_migratetype(page)))
     			__mod_zone_page_state(zone, NR_FREE_CMA_PAGES,
     					      -(1 << order));
     	}
    +
    +	/*
    +	 * i pages were removed from the buddy list even if some leak due
    +	 * to check_pcp_refill failing so adjust NR_FREE_PAGES based
    +	 * on i. Do not confuse with 'alloced' which is the number of
    +	 * pages added to the pcp list.
    +	 */
     	__mod_zone_page_state(zone, NR_FREE_PAGES, -(i << order));
     	spin_unlock(&zone->lock);
    -	return i;
    +	return alloced;
     }
     
     #ifdef CONFIG_NUMA
