From: Jan H. Schönherr <jschoenh@amazon.de>
Subject: [PATCH 1/2] mm: Fix memory size alignment in devm_memremap_pages_release()
The functions devm_memremap_pages() and devm_memremap_pages_release() use
different ways to calculate the section-aligned amount of memory. The
latter function may use an incorrect size if the memory region is small
but straddles a section border.

Use the same code for both.
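
For illustration only (not part of the change): a minimal stand-alone sketch of
the two calculations, assuming a hypothetical 128 MiB section size and made-up
addresses. A 2 MiB region that starts 1 MiB before a section border spans two
sections, but the old calculation yields only one.

	/*
	 * Stand-alone sketch (hypothetical values, not kernel code): compare
	 * the old and new align_size calculations for a region that straddles
	 * a section border.
	 */
	#include <stdio.h>

	#define SECTION_SIZE	(1UL << 27)			/* assumed: 128 MiB sections */
	#define ALIGN(x, a)	(((x) + (a) - 1) & ~((a) - 1))	/* round x up to a multiple of a */

	int main(void)
	{
		unsigned long start = 0xf7f00000UL;	/* example: 1 MiB below the 0xf8000000 border */
		unsigned long size  = 0x00200000UL;	/* example: 2 MiB, ends 1 MiB past the border */

		unsigned long align_start = start & ~(SECTION_SIZE - 1);

		/* Old: aligns only the size, ignoring where the region starts. */
		unsigned long old_size = ALIGN(size, SECTION_SIZE);

		/* New: aligns the end address and subtracts the aligned start. */
		unsigned long new_size = ALIGN(start + size, SECTION_SIZE) - align_start;

		printf("old align_size = %#lx (one section)\n", old_size);	/* 0x8000000  */
		printf("new align_size = %#lx (two sections)\n", new_size);	/* 0x10000000 */
		return 0;
	}

With these values the old code would hand arch_remove_memory() a size one
section too small, leaving part of the arch mapping in place.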

Fixes: 5f29a77cd957 ("mm: fix mixed zone detection in devm_memremap_pages")
Signed-off-by: Jan H. Schönherr <jschoenh@amazon.de>
---
kernel/memremap.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/kernel/memremap.c b/kernel/memremap.c
index 403ab9c..4712ce6 100644
--- a/kernel/memremap.c
+++ b/kernel/memremap.c
@@ -301,7 +301,8 @@ static void devm_memremap_pages_release(struct device *dev, void *data)
 
 	/* pages are dead and unused, undo the arch mapping */
 	align_start = res->start & ~(SECTION_SIZE - 1);
-	align_size = ALIGN(resource_size(res), SECTION_SIZE);
+	align_size = ALIGN(res->start + resource_size(res), SECTION_SIZE)
+			- align_start;
 
 	mem_hotplug_begin();
 	arch_remove_memory(align_start, align_size);
--
2.9.3.1.gcba166c.dirty