Subject: Re: riscv32 EXT4 splat, 6.8 regression?
On 2024-04-16 Mike Rapoport wrote:
> On Tue, Apr 16, 2024 at 06:00:29PM +0100, Matthew Wilcox wrote:
> > On Tue, Apr 16, 2024 at 07:31:54PM +0300, Mike Rapoport wrote:
> > > > - if (!IS_ENABLED(CONFIG_64BIT)) {
> > > > - max_mapped_addr = __pa(~(ulong)0);
> > > > - if (max_mapped_addr == (phys_ram_end - 1))
> > > > - memblock_set_current_limit(max_mapped_addr - 4096);
> > > > - }
> > > > + memblock_reserve(__pa(-PAGE_SIZE), PAGE_SIZE);
> > >
> > > Ack.
> >
> > Can this go to generic code instead of letting architecture maintainers
> > fall over it?
>
> Yes, it just has to happen before setup_arch() where most architectures
> enable memblock allocations.

This also works; the reported problem disappears.
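
For reference, here is a minimal sketch of what the generic placement Mike
describes might look like. The helper name and the idea of calling it from
start_kernel() before setup_arch() are only my assumptions, not an actual
patch, and whether __pa() is already usable that early differs between
architectures:

/*
 * Hypothetical sketch: keep memblock from ever handing out the physical
 * page whose linear-map address is the last virtual page, because
 * pointers in that page collide with the IS_ERR_VALUE() range.
 * Assumed to run from start_kernel() before setup_arch().
 */
static void __init reserve_err_ptr_page(void)
{
	memblock_reserve(__pa(-PAGE_SIZE), PAGE_SIZE);
}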

However, I am confused about one thing: doesn't this make one page of
physical memory inaccessible?

Would it be better to solve this by setting max_low_pfn instead? That way
the page at least remains accessible as high memory.
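
For concreteness, here is how the clamp in the diff below works out under
one example configuration; the numbers (rv32 with PAGE_OFFSET = 0xC0000000
and 1 GiB of RAM starting at phys_ram_base = 0x80000000, using the
linear-map relation __pa(va) = va - PAGE_OFFSET + phys_ram_base) are only
an illustrative assumption:

  -PAGE_SIZE (as unsigned long)  = 0xFFFFF000
  __pa(0xFFFFF000)               = 0xFFFFF000 - 0xC0000000 + 0x80000000
                                 = 0xBFFFF000
  phys_ram_end                   = 0x80000000 + 0x40000000 = 0xC0000000
  max_pfn                        = PFN_DOWN(0xC0000000)    = 0xC0000
  max_low_pfn                    = min(0xC0000, PFN_DOWN(0xBFFFF000))
                                 = 0xBFFFF

so only the one page whose linear-map address would collide with
IS_ERR_VALUE() drops out of lowmem, and it can still be used as high memory.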

Best regards,
Nam

diff --git a/arch/riscv/mm/init.c b/arch/riscv/mm/init.c
index fa34cf55037b..6e3130cae675 100644
--- a/arch/riscv/mm/init.c
+++ b/arch/riscv/mm/init.c
@@ -197,7 +197,6 @@ early_param("mem", early_mem);
static void __init setup_bootmem(void)
{
phys_addr_t vmlinux_end = __pa_symbol(&_end);
- phys_addr_t max_mapped_addr;
phys_addr_t phys_ram_end, vmlinux_start;

if (IS_ENABLED(CONFIG_XIP_KERNEL))
@@ -235,23 +234,9 @@ static void __init setup_bootmem(void)
if (IS_ENABLED(CONFIG_64BIT))
kernel_map.va_pa_offset = PAGE_OFFSET - phys_ram_base;

- /*
- * memblock allocator is not aware of the fact that last 4K bytes of
- * the addressable memory can not be mapped because of IS_ERR_VALUE
- * macro. Make sure that last 4k bytes are not usable by memblock
- * if end of dram is equal to maximum addressable memory. For 64-bit
- * kernel, this problem can't happen here as the end of the virtual
- * address space is occupied by the kernel mapping then this check must
- * be done as soon as the kernel mapping base address is determined.
- */
- if (!IS_ENABLED(CONFIG_64BIT)) {
- max_mapped_addr = __pa(~(ulong)0);
- if (max_mapped_addr == (phys_ram_end - 1))
- memblock_set_current_limit(max_mapped_addr - 4096);
- }
-
min_low_pfn = PFN_UP(phys_ram_base);
- max_low_pfn = max_pfn = PFN_DOWN(phys_ram_end);
+ max_pfn = PFN_DOWN(phys_ram_end);
+ max_low_pfn = min(max_pfn, PFN_DOWN(__pa(-PAGE_SIZE)));
high_memory = (void *)(__va(PFN_PHYS(max_low_pfn)));

dma32_phys_limit = min(4UL * SZ_1G, (unsigned long)PFN_PHYS(max_low_pfn));