From: Geert Uytterhoeven <geert+renesas@glider.be>
Subject: [PATCH] riscv: Only extend kernel reservation if mapped read-only
Date: Wed, 28 Apr 2021

When the kernel mapping was moved outside of the linear mapping, the
kernel memory reservation was increased to take the mapping granularity
into account. However, this was done unconditionally, regardless of
whether the kernel memory is mapped read-only or not.

If this extension is not needed, up to 2 MiB may be lost, which has a
big impact on e.g. Canaan K210 (64-bit nommu) platforms with only 8 MiB
of RAM.

Reclaim the lost memory by only extending the reserved region when
needed, i.e. matching the conditional logic around the call to
protect_kernel_linear_mapping_text_rodata().

Fixes: 2bfc6cd81bd17e43 ("riscv: Move kernel mapping outside of linear mapping")
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>
---
Only tested on K210 (SiPeed MAIX BiT):

-Memory: 5852K/8192K available (1344K kernel code, 147K rwdata, 272K rodata, 106K init, 72K bss, 2340K reserved, 0K cma-reserved)
+Memory: 5948K/8192K available (1344K kernel code, 147K rwdata, 272K rodata, 106K init, 72K bss, 2244K reserved, 0K cma-reserved)

Yes, I was lucky, as only 96 KiB was lost ;-)
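For reference, the 96 KiB reclaimed above is exactly the round-up slack
from _end to the next PMD boundary, which the old code reserved
unconditionally. A minimal user-space sketch of that arithmetic (the
PMD geometry matches rv64, but the sample _end value is hypothetical,
chosen only to reproduce the 96 KiB seen above):

/*
 * Sketch of the round-up done in setup_bootmem(); the sample
 * vmlinux_end is hypothetical, not the K210's actual _end.
 */
#include <stdio.h>

#define PMD_SHIFT	21
#define PMD_SIZE	(1UL << PMD_SHIFT)	/* 2 MiB on rv64 */
#define PMD_MASK	(~(PMD_SIZE - 1))

int main(void)
{
	unsigned long vmlinux_end = 0x803e8000UL;	/* hypothetical _end */
	unsigned long rounded = (vmlinux_end + PMD_SIZE - 1) & PMD_MASK;

	/* Up to PMD_SIZE - 1 bytes beyond _end may end up reserved. */
	printf("end=%#lx rounded=%#lx slack=%lu KiB\n",
	       vmlinux_end, rounded, (rounded - vmlinux_end) >> 10);
	return 0;
}

With this patch, configurations that do not map the kernel read-only in
the linear mapping (e.g. nommu, or !STRICT_KERNEL_RWX) skip the
round-up entirely, so that slack is returned to memblock.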
---
arch/riscv/mm/init.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index 788eb222deacf994..3439783f26abc488 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -136,11 +136,17 @@ void __init setup_bootmem(void)

/*
* Reserve from the start of the kernel to the end of the kernel
- * and make sure we align the reservation on PMD_SIZE since we will
+ */
+#if defined(CONFIG_STRICT_KERNEL_RWX) && defined(CONFIG_64BIT) && \
+ defined(CONFIG_MMU) && !defined(CONFIG_XIP_KERNEL)
+ /*
+ * Make sure we align the reservation on PMD_SIZE since we will
* map the kernel in the linear mapping as read-only: we do not want
* any allocation to happen between _end and the next pmd aligned page.
*/
- memblock_reserve(vmlinux_start, (vmlinux_end - vmlinux_start + PMD_SIZE - 1) & PMD_MASK);
+ vmlinux_end = (vmlinux_end + PMD_SIZE - 1) & PMD_MASK;
+#endif
+ memblock_reserve(vmlinux_start, vmlinux_end - vmlinux_start);

/*
* memblock allocator is not aware of the fact that last 4K bytes of
--
2.25.1