Date: Wed, 7 Oct 2020
From: Catalin Marinas <catalin.marinas@arm.com>
Subject: Re: [PATCH v12 0/9] support reserving crashkernel above 4G on arm64 kdump

On Wed, Oct 07, 2020 at 12:37:49PM +0530, Bhupesh Sharma wrote:
> On Tue, Oct 6, 2020 at 11:30 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
> > On Mon, Oct 05, 2020 at 11:12:10PM +0530, Bhupesh Sharma wrote:
> > > I think my earlier email with the test results on this series bounced
> > > off the mailing list server (for some weird reason), but I still see
> > > several issues with this patchset. I will add specific issues in the
> > > review comments for each patch again, but overall, with a crashkernel
> > > size of say 786M, I see the following issue:
> > >
> > > # cat /proc/cmdline
> > > BOOT_IMAGE=(hd7,gpt2)/vmlinuz-5.9.0-rc7+ root=<..snip..> rd.lvm.lv=<..snip..> crashkernel=786M
> > >
> > > I see two regions of size 256M and 786M reserved in the low and high
> > > regions respectively, so we reserve a total of 1042M of memory, which
> > > is incorrect behaviour:
> > >
> > > # dmesg | grep -i crash
> > > [ 0.000000] Reserving 256MB of low memory at 2816MB for crashkernel (System low RAM: 768MB)
> > > [ 0.000000] Reserving 786MB of memory at 654158MB for crashkernel (System RAM: 130816MB)
> > > [ 0.000000] Kernel command line: BOOT_IMAGE=(hd2,gpt2)/vmlinuz-5.9.0-rc7+ root=/dev/mapper/rhel_ampere--hr330a--03-root ro rd.lvm.lv=rhel_ampere-hr330a-03/root rd.lvm.lv=rhel_ampere-hr330a-03/swap crashkernel=786M cma=1024M
> > >
> > > # cat /proc/iomem | grep -i crash
> > > b0000000-bfffffff : Crash kernel (low)
> > > bfcbe00000-bffcffffff : Crash kernel
> >
> > As Chen said, that's the intended behaviour and how x86 works. The
> > requested 786M goes in the high range if there's not enough low memory
> > and an additional buffer for swiotlb is allocated, hence the low 256M.
>
> I understand, but why 256M (as low) for arm64? x86_64 setups usually
> have more system memory available compared to several commercially
> available arm64 setups. So is the intent just to keep the behaviour
> similar between arm64 and x86_64?

Similar in the sense of the fallback to high memory and some low memory
allocation, but the amounts can vary per architecture.
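
To make the mechanism concrete, here is a rough sketch of the x86-style
reserve_crashkernel() flow (not code from this series; CRASH_ALIGN and
the CRASH_ADDR_*_MAX limits follow the x86 naming and are only
illustrative here):

	/* Try to place the crash kernel below the low limit first. */
	crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
					       0, CRASH_ADDR_LOW_MAX);
	if (!crash_base) {
		/* Not enough low memory: fall back to the high range... */
		crash_base = memblock_phys_alloc_range(crash_size, CRASH_ALIGN,
						       CRASH_ADDR_LOW_MAX,
						       CRASH_ADDR_HIGH_MAX);
		/*
		 * ...and reserve an extra low block (the 256M above) so the
		 * crashdump kernel still has memory for swiotlb/DMA buffers.
		 */
		if (crash_base)
			reserve_crashkernel_low();
	}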

> Should we have a CONFIG option / bootarg to help one select the max
> 'low_size'? Currently the 'low_size' value is calculated as:
>
> /*
>  * two parts from kernel/dma/swiotlb.c:
>  * -swiotlb size: user-specified with swiotlb= or default.
>  *
>  * -swiotlb overflow buffer: now hardcoded to 32k. We round it
>  * to 8M for other buffers that may need to stay low too. Also
>  * make sure we allocate enough extra low memory so that we
>  * don't run out of DMA buffers for 32-bit devices.
>  */
> low_size = max(swiotlb_size_or_default() + (8UL << 20), 256UL << 20);
>
> Since many arm64 boards ship with swiotlb=0 (turned off) via kernel
> bootargs, low_size still ends up being 256M in such cases, whereas
> this 256M could be used for other purposes - so should we limit this
> to 64M and fail the crash kernel allocation request (gracefully)
> otherwise?

I think it makes sense to set low_size = 0 if
swiotlb_size_or_default() is 0. The assumption would be that if the main
kernel doesn't need a swiotlb, the crashdump one wouldn't need it
either. But this probably needs the ZONE_DMA issue for non-RPi4 platforms
addressed as well (ZONE_DMA expanded to cover the whole ZONE_DMA32).
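
A minimal sketch of that idea, reusing the x86-style calculation quoted
above (crash_low_size() is a hypothetical helper name, not an existing
function):

	static unsigned long long crash_low_size(void)
	{
		unsigned long long swiotlb = swiotlb_size_or_default();

		/*
		 * If the first kernel runs without a swiotlb, assume the
		 * crashdump kernel can do without one too and skip the
		 * low reservation entirely.
		 */
		if (!swiotlb)
			return 0;

		/* Otherwise keep the current heuristic: swiotlb + 8M, at least 256M. */
		return max(swiotlb + (8ULL << 20), 256ULL << 20);
	}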

--
Catalin
