From: Greentime Hu <>
Date: Thu, 1 Aug 2019 11:34:37 +0800
Subject: Re: [PATCH v4 2/2] RISC-V: Implement sparsemem
Hi Logan,
Logan Gunthorpe <logang@deltatee.com> wrote on Thu, Aug 1, 2019 at 1:08 AM:
>
> On 2019-07-31 12:30 a.m., Greentime Hu wrote:
> > I looked at this issue more closely.
> > I found it always sets each memblock region to node 0. Does this make sense?
> > I am not sure if I understand this correctly. Do you have any idea for
> > this? Thank you. :)
>
> Yes, I think this is normal. When we talk about memory nodes we're
> talking about NUMA nodes, which are unrelated to device tree nodes.
Ok, but it seems the second memblock region may overwrite the first one in the for_each_memblock(memory, reg) loop, since it always uses this API to assign everything to node 0:

	memblock_set_node(PFN_PHYS(start_pfn), PFN_PHYS(end_pfn - start_pfn),
			  &memblock.memory, 0);
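For reference, the loop I am looking at is roughly the following sketch; this is my reading of the pattern in the patch, not a verbatim quote of the diff:

	struct memblock_region *reg;

	/*
	 * Sketch of the node assignment loop as I read it: every
	 * memblock region, regardless of which device tree memory
	 * node it came from, is handed to memblock_set_node() with
	 * nid 0.
	 */
	for_each_memblock(memory, reg) {
		unsigned long start_pfn = memblock_region_memory_base_pfn(reg);
		unsigned long end_pfn = memblock_region_memory_end_pfn(reg);

		memblock_set_node(PFN_PHYS(start_pfn),
				  PFN_PHYS(end_pfn - start_pfn),
				  &memblock.memory, 0);
	}

So with two memory nodes in the dts, memblock_set_node() gets called once per region, both times with nid 0.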
> I'm not really sure what's causing the crash. Have you verified it's
> this patch that causes it? Is it related to there being a hole in your
> memory, does it work if you only have one memory node?
It works fine if there is only one memory node described in the dts.
I think it is related to there being a hole in the memory described by the device tree source. I don't actually have a platform with a hole in its memory region, so I used the device tree source to describe one.
The physical address layout is like this: RAM at 2GB-3GB, a hole, then RAM at 6GB-7GB.
	memory@80000000 {
		device_type = "memory";
		reg = <0x0 0x80000000 0x0 0x40000000>;
	};

	memory@180000000 {
		device_type = "memory";
		reg = <0x1 0x80000000 0x0 0x40000000>;
	};
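If I read the memblock state correctly, with 4KB pages this dts should give two memory regions along these lines (my own arithmetic, not dumped from an actual run):

	region 0: base 0x0_80000000, size 0x40000000  -> PFNs 0x080000 - 0x0C0000
	region 1: base 0x1_80000000, size 0x40000000  -> PFNs 0x180000 - 0x1C0000

and both regions end up assigned nid 0 by the loop above.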
Thank you for the quick reply. :)