From: David Hildenbrand <david@redhat.com>
Subject: [PATCH v3 06/11] mm/memory_hotplug: Fix crashes in shrink_zone_span()
We can currently crash in shrink_zone_span() when we access an
uninitialized memmap (via page_to_nid()). The root issue is that we
cannot always identify which memmap was actually initialized.

Let's improve the situation by looking only at online PFNs for
!ZONE_DEVICE memory. With this change, the check is reliable - similar
to set_zone_contiguous(). (Side note: set_zone_contiguous() will never
succeed on ZONE_DEVICE memory right now, as it has no online PFNs ...).

For ZONE_DEVICE memory, make sure we don't crash by special-casing
poisoned pages and always checking that the NID has a sane value. We
might still read garbage and get false positives, but it certainly
improves the situation.
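
For illustration, the checks added below boil down to something like the
following condensed helper (the helper name is hypothetical and not part
of this patch):

static bool memmap_looks_initialized(struct zone *zone, unsigned long pfn)
{
	if (zone_idx(zone) == ZONE_DEVICE) {
		/*
		 * There is no online tracking for ZONE_DEVICE, so at best
		 * skip memmaps that still carry the memblock poison pattern.
		 */
		return !PagePoisoned(pfn_to_page(pfn));
	}

	/* !ZONE_DEVICE: only online PFNs have an initialized memmap. */
	return pfn_to_online_page(pfn) != NULL;
}

Both find_smallest_section_pfn() and find_biggest_section_pfn() perform
this check before dereferencing the memmap via pfn_to_nid().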

Note: Subsections in particular make it very hard to detect which parts
of a ZONE_DEVICE memmap were actually initialized - otherwise we could
just have reused SECTION_IS_ONLINE. This needs more thought.

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Oscar Salvador <osalvador@suse.de>
Cc: David Hildenbrand <david@redhat.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Pavel Tatashin <pasha.tatashin@soleen.com>
Cc: Dan Williams <dan.j.williams@intel.com>
Cc: Wei Yang <richardw.yang@linux.intel.com>
Reported-by: Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
Signed-off-by: David Hildenbrand <david@redhat.com>
---
mm/memory_hotplug.c | 22 ++++++++++++++++++++++
1 file changed, 22 insertions(+)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 663853bf97ed..65b3fdf7f838 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -334,6 +334,17 @@ static unsigned long find_smallest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(!pfn_valid(start_pfn)))
 			continue;
 
+		/*
+		 * TODO: There is no way we can identify whether the memmap
+		 * of ZONE_DEVICE memory was initialized. We might get
+		 * false positives when reading garbage.
+		 */
+		if (zone_idx(zone) == ZONE_DEVICE) {
+			if (PagePoisoned(pfn_to_page(start_pfn)))
+				continue;
+		} else if (!pfn_to_online_page(start_pfn))
+			continue;
+
 		if (unlikely(pfn_to_nid(start_pfn) != nid))
 			continue;
 
@@ -359,6 +370,17 @@ static unsigned long find_biggest_section_pfn(int nid, struct zone *zone,
 		if (unlikely(!pfn_valid(pfn)))
 			continue;
 
+		/*
+		 * TODO: There is no way we can identify whether the memmap
+		 * of ZONE_DEVICE memory was initialized. We might get
+		 * false positives when reading garbage.
+		 */
+		if (zone_idx(zone) == ZONE_DEVICE) {
+			if (PagePoisoned(pfn_to_page(pfn)))
+				continue;
+		} else if (!pfn_to_online_page(pfn))
+			continue;
+
 		if (unlikely(pfn_to_nid(pfn) != nid))
 			continue;
 
--
2.21.0