From: Eric W. Biederman <ebiederm@aristanetworks.com>
Subject: [PATCH] x86_64: Fix page table building regression
Date: 2011-03-26

Recently I had cause to enable DEBUG_PAGEALLOC and I discovered that my
kdump kernel would not boot. After some investigation it turned out that
commit 80989ce064 "x86: clean up and and print out initial max_pfn_mapped"
unnecessarily applied a limitation of the 32bit page table setup to the
64bit code. The initial 64bit page table setup code is careful to map in
each of its initial page table pages and to unmap them again when done,
so those pages can live anywhere in physical memory; we don't need to
limit ourselves to pages that are already mapped.
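
Roughly, the 64bit path does something like the following. This is only
a simplified sketch of the map/fill/unmap pattern, not the actual
init_64.c code: the function names and the pgt_next_pfn/pgt_last_pfn
bookkeeping variables are placeholders made up for the example.

#include <linux/init.h>		/* __init, __initdata */
#include <linux/kernel.h>	/* panic() */
#include <asm/io.h>		/* early_memremap(), early_iounmap() */
#include <asm/page.h>		/* PAGE_SIZE, PAGE_SHIFT, clear_page() */

/* Placeholder bookkeeping for the reserved page table range. */
static unsigned long pgt_next_pfn __initdata;
static unsigned long pgt_last_pfn __initdata;

/* Hand out the next reserved page through a temporary mapping. */
static __init void *early_alloc_pgt_page(unsigned long *phys)
{
	unsigned long pfn = pgt_next_pfn++;
	void *adr;

	if (pfn >= pgt_last_pfn)
		panic("ran out of reserved page table space");

	/* The page itself does not have to be below max_pfn_mapped. */
	adr = early_memremap(pfn << PAGE_SHIFT, PAGE_SIZE);
	clear_page(adr);
	*phys = pfn << PAGE_SHIFT;
	return adr;
}

/* Drop the temporary mapping once the page has been filled in. */
static __init void early_unmap_pgt_page(void *adr)
{
	early_iounmap(adr, PAGE_SIZE);
}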

In my case I hit this because the first 512M was not usable by the
kdump kernel.

Allocating the page tables higher should improve the reliability of
kdump kernels. As it stands today, with the recommended 128M reserved
for a kdump kernel, the reserved area will frequently be allocated above
512M, and the kdump kernel will then only be able to allocate its page
tables from the low 1M of RAM. Strictly speaking that memory is
available, but it is the one piece of memory for which we have no 100%
guarantee that there was no on-going DMA to it before the kdump kernel
started.
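
(For reference, that 128M reservation is the kind normally requested
with a kernel command line option along the lines of

	crashkernel=128M

though the exact size and placement are of course system specific.)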

Allowing the page tables to come from above the low 512M will also allow
kernels built with DEBUG_PAGEALLOC to boot on systems with 256G of RAM.
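
(Rough numbers: DEBUG_PAGEALLOC forces the direct mapping to use 4K
pages, so 256G of RAM needs 256G / 4K = 64M PTE entries at 8 bytes each,
i.e. about 512M of page tables, which can never fit below 512M.)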

Cc: stable@kernel.org
Signed-off-by: Eric W. Biederman <ebiederm@aristanetworks.com>
---
arch/x86/mm/init.c | 8 +++++---
1 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index 947f42a..52460a1 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -33,7 +33,7 @@ int direct_gbpages
 static void __init find_early_table_space(unsigned long end, int use_pse,
					   int use_gbpages)
 {
-	unsigned long puds, pmds, ptes, tables, start;
+	unsigned long puds, pmds, ptes, tables, start, stop;
 	phys_addr_t base;
 
 	puds = (end + PUD_SIZE - 1) >> PUD_SHIFT;
@@ -74,11 +74,13 @@ static void __init find_early_table_space(unsigned long end, int use_pse,
 	 */
 #ifdef CONFIG_X86_32
 	start = 0x7000;
+	/* The 32bit kernel_physical_mapping_init is limited */
+	stop = max_pfn_mapped<<PAGE_SHIFT;
 #else
 	start = 0x8000;
+	stop = end;
 #endif
-	base = memblock_find_in_range(start, max_pfn_mapped<<PAGE_SHIFT,
-					tables, PAGE_SIZE);
+	base = memblock_find_in_range(start, stop, tables, PAGE_SIZE);
 	if (base == MEMBLOCK_ERROR)
 		panic("Cannot find space for the kernel page tables");
 
--
1.7.4

