Subject: Re: [PATCH] mm/sparse: only sub-section aligned range would be populated
From: David Hildenbrand
Date: 2020-07-01
On 01.07.20 04:32, Wei Yang wrote:
> On Tue, Jun 30, 2020 at 02:52:35PM +0200, David Hildenbrand wrote:
>> On 30.06.20 04:14, Wei Yang wrote:
>>> There are two code paths which invoke __populate_section_memmap():
>>>
>>> * sparse_init_nid()
>>> * sparse_add_section()
>>>
>>> In both cases, we are sure the memory range is sub-section aligned:
>>>
>>> * we pass PAGES_PER_SECTION to sparse_init_nid()
>>> * we check the range with check_pfn_span() before calling
>>> sparse_add_section()
>>>
>>> Also, in the counterpart of __populate_section_memmap(), we don't do such a
>>> calculation and check, since the range is already checked by check_pfn_span()
>>> in __remove_pages().
>>>
>>> Remove the calculation and check to keep it simple and consistent with its
>>> counterpart.
>>>
>>> Signed-off-by: Wei Yang <richard.weiyang@linux.alibaba.com>
>>> ---
>>> mm/sparse-vmemmap.c | 16 ++--------------
>>> 1 file changed, 2 insertions(+), 14 deletions(-)
>>>
>>> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
>>> index 0db7738d76e9..24b01ebae111 100644
>>> --- a/mm/sparse-vmemmap.c
>>> +++ b/mm/sparse-vmemmap.c
>>> @@ -247,20 +247,8 @@ int __meminit vmemmap_populate_basepages(unsigned long start,
>>>  struct page * __meminit __populate_section_memmap(unsigned long pfn,
>>>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
>>>  {
>>> -	unsigned long start;
>>> -	unsigned long end;
>>> -
>>> -	/*
>>> -	 * The minimum granularity of memmap extensions is
>>> -	 * PAGES_PER_SUBSECTION as allocations are tracked in the
>>> -	 * 'subsection_map' bitmap of the section.
>>> -	 */
>>> -	end = ALIGN(pfn + nr_pages, PAGES_PER_SUBSECTION);
>>> -	pfn &= PAGE_SUBSECTION_MASK;
>>> -	nr_pages = end - pfn;
>>> -
>>> -	start = (unsigned long) pfn_to_page(pfn);
>>> -	end = start + nr_pages * sizeof(struct page);
>>> +	unsigned long start = (unsigned long) pfn_to_page(pfn);
>>> +	unsigned long end = start + nr_pages * sizeof(struct page);
>>>
>>>  	if (vmemmap_populate(start, end, nid, altmap))
>>>  		return NULL;
>>>
>>
>> Can we add a WARN_ON_ONCE to catch mis-use in the future?
>>
>> if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
>> !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
>> return NULL;
>
> How about adding this to both population and depopulation?

We don't have a similar wrapper for vmemmap_free().
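
For illustration, a rough sketch of the simplified __populate_section_memmap()
with the suggested guard folded in (just the snippet above applied on top of
the patch; illustrative only, not final code, and the trailing return is
assumed to stay pfn_to_page(pfn)):

struct page * __meminit __populate_section_memmap(unsigned long pfn,
		unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
{
	unsigned long start, end;

	/* Callers must pass sub-section aligned ranges; catch mis-use. */
	if (WARN_ON_ONCE(!IS_ALIGNED(pfn, PAGES_PER_SUBSECTION) ||
			 !IS_ALIGNED(nr_pages, PAGES_PER_SUBSECTION)))
		return NULL;

	start = (unsigned long) pfn_to_page(pfn);
	end = start + nr_pages * sizeof(struct page);

	if (vmemmap_populate(start, end, nid, altmap))
		return NULL;

	return pfn_to_page(pfn);
}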

--
Thanks,

David / dhildenb
