Subject: [PATCH V2] mm, page_alloc: reserve pageblocks for high-order atomic allocations on demand -fix
There is a redundant check and a memory leak introduced by a patch in
mmotm. This patch removes the unlikely(order) check, since this code
path is only reached when order is non-zero. It also checks whether a
page has already been allocated from the high-atomic reserve before
falling back to __rmqueue(), so that the reserved page is no longer
overwritten and leaked.

This is a fix to the mmotm patch
mm-page_alloc-reserve-pageblocks-for-high-order-atomic-allocations-on-demand.patch
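
To illustrate the leak: with that patch applied, the locked section of
buffered_rmqueue() effectively did the following (a simplified sketch,
tracing and unlock omitted):

	page = NULL;
	if (unlikely(order) && (alloc_flags & ALLOC_HARDER)) {
		/* may take a page from the high-atomic reserve */
		page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
	}

	/* unconditional call: overwrites, and thereby leaks, any page
	 * taken from MIGRATE_HIGHATOMIC above */
	page = __rmqueue(zone, order, migratetype, gfp_flags);

With this fix, __rmqueue() is only called when the high-atomic attempt
did not return a page, and the redundant order check is dropped.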

Signed-off-by: yalin wang <yalin.wang2010@gmail.com>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
mm/page_alloc.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0d6f540..043b691 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -2241,13 +2241,13 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 		spin_lock_irqsave(&zone->lock, flags);
 
 		page = NULL;
-		if (unlikely(order) && (alloc_flags & ALLOC_HARDER)) {
+		if (alloc_flags & ALLOC_HARDER) {
 			page = __rmqueue_smallest(zone, order, MIGRATE_HIGHATOMIC);
 			if (page)
 				trace_mm_page_alloc_zone_locked(page, order, migratetype);
 		}
-
-		page = __rmqueue(zone, order, migratetype, gfp_flags);
+		if (!page)
+			page = __rmqueue(zone, order, migratetype, gfp_flags);
 		spin_unlock(&zone->lock);
 		if (!page)
 			goto failed;
--
1.9.1

