Subject: Re: [PATCH v2] mm/sparse.c: Use kvmalloc_node/kvfree to alloc/free memmap for the classic sparse
On Thu, Mar 12, 2020 at 02:18:26PM +0000, Wei Yang wrote:
> On Thu, Mar 12, 2020 at 06:34:16AM -0700, Matthew Wilcox wrote:
> >On Thu, Mar 12, 2020 at 09:08:22PM +0800, Baoquan He wrote:
> >> This change makes populate_section_memmap()/depopulate_section_memmap()
> >> much simpler.
> >>
> >> Suggested-by: Michal Hocko <mhocko@kernel.org>
> >> Signed-off-by: Baoquan He <bhe@redhat.com>
> >> ---
> >> v1->v2:
> >> The old version only used __get_free_pages() to replace alloc_pages()
> >> in populate_section_memmap().
> >> http://lkml.kernel.org/r/20200307084229.28251-8-bhe@redhat.com
> >>
> >> mm/sparse.c | 27 +++------------------------
> >> 1 file changed, 3 insertions(+), 24 deletions(-)
> >>
> >> diff --git a/mm/sparse.c b/mm/sparse.c
> >> index bf6c00a28045..362018e82e22 100644
> >> --- a/mm/sparse.c
> >> +++ b/mm/sparse.c
> >> @@ -734,35 +734,14 @@ static void free_map_bootmem(struct page *memmap)
> >> struct page * __meminit populate_section_memmap(unsigned long pfn,
> >> unsigned long nr_pages, int nid, struct vmem_altmap *altmap)
> >> {
> >> - struct page *page, *ret;
> >> - unsigned long memmap_size = sizeof(struct page) * PAGES_PER_SECTION;
> >> -
> >> - page = alloc_pages(GFP_KERNEL|__GFP_NOWARN, get_order(memmap_size));
> >> - if (page)
> >> - goto got_map_page;
> >> -
> >> - ret = vmalloc(memmap_size);
> >> - if (ret)
> >> - goto got_map_ptr;
> >> -
> >> - return NULL;
> >> -got_map_page:
> >> - ret = (struct page *)pfn_to_kaddr(page_to_pfn(page));
> >> -got_map_ptr:
> >> -
> >> - return ret;
> >> + return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
> >> + GFP_KERNEL|__GFP_NOWARN, nid);
> >
> >Use of NOWARN here is inappropriate, because there's no fallback.
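
(To be concrete: kvmalloc_node() already adds __GFP_NOWARN to the one
kmalloc attempt it can recover from, so the hunk above arguably wants
to be just -- a sketch, assuming nothing else in the patch changes:

	return kvmalloc_node(sizeof(struct page) * PAGES_PER_SECTION,
			     GFP_KERNEL, nid);

so that a final, unrecoverable allocation failure still warns.)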
>
> Hmm... this replacement is a little tricky.
>
> When you look into kvmalloc_node(), it only falls back to vmalloc() when
> the size is bigger than PAGE_SIZE. This means the change here may not be
> equivalent to the old code if memmap_size is less than PAGE_SIZE.
>
> For example, if:
> PAGE_SIZE = 64K
> SECTION_SIZE = 128M
>
> would lead to memmap_size = 2K, which is less than PAGE_SIZE.

Yes, I thought about that. I decided it wasn't a problem, as long as
the struct page remains aligned, and we now have a guarantee that
power-of-two allocations of 512 bytes and above are naturally aligned.
With a 64 byte struct page, as long as we're allocating the memmap for
at least 8 pages (8 * 64 = 512 bytes), we know it'll be naturally
aligned.
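
For reference, the fallback logic in kvmalloc_node() currently looks
roughly like this (paraphrased and simplified from mm/util.c, so treat
it as a sketch rather than the exact code):

	void *kvmalloc_node(size_t size, gfp_t flags, int node)
	{
		gfp_t kmalloc_flags = flags;
		void *ret;

		/* Suppress warnings/retries only when a fallback exists. */
		if (size > PAGE_SIZE) {
			kmalloc_flags |= __GFP_NOWARN;
			if (!(kmalloc_flags & __GFP_RETRY_MAYFAIL))
				kmalloc_flags |= __GFP_NORETRY;
		}

		ret = kmalloc_node(size, kmalloc_flags, node);

		/* No vmalloc fallback for sub-page requests ... */
		if (ret || size <= PAGE_SIZE)
			return ret;

		/* ... otherwise fall back to vmalloc. */
		return __vmalloc_node_flags_caller(size, node, flags,
				__builtin_return_address(0));
	}

So a sub-PAGE_SIZE memmap comes straight from the slab allocator rather
than from whole pages as before, and the alignment guarantee above is
what makes that acceptable.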

Your calculation doesn't take the size of struct page into account.
128M / 64k is indeed 2k pages, but you forgot to multiply by the
64 bytes per struct page, which takes us to 128kB.
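
Spelled out with your example numbers (illustrative, assuming a 64-byte
struct page):

	PAGES_PER_SECTION = 128M / 64K      = 2048
	memmap_size       = 2048 * 64 bytes = 131072 bytes = 128kB

which is still well above PAGE_SIZE even with 64K pages, so the vmalloc
fallback in kvmalloc_node() remains available in this configuration.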
