Subject: [HMM v13 07/18] mm/ZONE_DEVICE/x86: add support for un-addressable device memory
It does not need much: just skip populating the kernel linear mapping
for the range of un-addressable device memory (the range is picked so
that no physical memory resource overlaps it). All the logic is in
shared mm code.
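
For context, a caller would pass the flags argument introduced earlier
in this series. The sketch below is purely illustrative (the helper
name is hypothetical and not part of this patch); it only shows how a
device memory hotplug path would request struct pages for a
CPU-inaccessible range:

	/*
	 * Illustrative sketch, not part of this patch: passing
	 * MEMORY_UNADDRESSABLE makes x86's arch_add_memory() skip
	 * init_memory_mapping() for this range, so struct pages are
	 * created but no kernel linear mapping is.
	 */
	static int example_add_device_memory(int nid, u64 start, u64 size)
	{
		/* Range is picked so no physical resource overlaps it. */
		return arch_add_memory(nid, start, size, MEMORY_UNADDRESSABLE);
	}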

Only x86-64 is supported, as this feature does not make much sense
with the constrained virtual address space of 32-bit architectures.

Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
---
arch/x86/mm/init_64.c | 28 ++++++++++++++--------------
1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 8c4abb0..556f7bb 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -661,13 +661,17 @@ int arch_add_memory(int nid, u64 start, u64 size, int flags)
 	unsigned long nr_pages = size >> PAGE_SHIFT;
 	int ret;
 
-	/* Need to add support for device and unaddressable memory if needed */
-	if (flags & MEMORY_UNADDRESSABLE) {
-		BUG();
-		return -EINVAL;
-	}
-
-	init_memory_mapping(start, start + size);
+	/*
+	 * We get un-addressable memory when someone adds a ZONE_DEVICE range
+	 * to get struct pages for device memory that is not accessible by
+	 * the CPU, so it is pointless to create a kernel linear mapping of
+	 * such memory.
+	 *
+	 * Core mm should make sure it never sets a pte pointing to such a
+	 * fake physical range.
+	 */
+	if (!(flags & MEMORY_UNADDRESSABLE))
+		init_memory_mapping(start, start + size);
 
 	ret = __add_pages(nid, zone, start_pfn, nr_pages);
 	WARN_ON_ONCE(ret);
@@ -972,12 +976,6 @@ int __ref arch_remove_memory(u64 start, u64 size, int flags)
 	struct zone *zone;
 	int ret;
 
-	/* Need to add support for device and unaddressable memory if needed */
-	if (flags & MEMORY_UNADDRESSABLE) {
-		BUG();
-		return -EINVAL;
-	}
-
 	/* With altmap the first mapped page is offset from @start */
 	altmap = to_vmem_altmap((unsigned long) page);
 	if (altmap)
@@ -985,7 +983,9 @@ int __ref arch_remove_memory(u64 start, u64 size, int flags)
 	zone = page_zone(page);
 	ret = __remove_pages(zone, start_pfn, nr_pages);
 	WARN_ON_ONCE(ret);
+
+	if (!(flags & MEMORY_UNADDRESSABLE))
+		kernel_physical_mapping_remove(start, start + size);
 
 	return ret;
 }
--
2.4.3