From: Ingo Molnar <mingo@kernel.org>
Date: 2018-02-16
Subject: Re: [v4 6/6] mm/memory_hotplug: optimize memory hotplug

* Pavel Tatashin <pasha.tatashin@oracle.com> wrote:

> During memory hotplugging we traverse struct pages three times:
>
> 1. memset(0) in sparse_add_one_section()
> 2. loop in __add_section() to set do: set_page_node(page, nid); and
> SetPageReserved(page);
> 3. loop in memmap_init_zone() to call __init_single_pfn()
>
> This patch remove the first two loops, and leaves only loop 3. All struct
> pages are initialized in one place, the same as it is done during boot.

s/remove
/removes
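
Side note: a minimal userspace sketch of why folding the three traversals
into one helps - struct page is mocked down to two fields and the helper
names are made up, so this is not the kernel code itself. Each extra pass
re-walks a memmap that has typically long been evicted from the cache:

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Mock of struct page, reduced to the two fields touched here. */
  struct mock_page {
          unsigned long flags;
          int nid;
  };

  /* Old scheme: three separate traversals of the same memmap. */
  static void init_three_passes(struct mock_page *memmap, size_t n, int nid)
  {
          size_t i;

          memset(memmap, 0, n * sizeof(*memmap));  /* pass 1: zeroing */
          for (i = 0; i < n; i++)                  /* pass 2: node id */
                  memmap[i].nid = nid;
          for (i = 0; i < n; i++)                  /* pass 3: flags   */
                  memmap[i].flags = 1;
  }

  /* New scheme: every field is set in a single traversal. */
  static void init_one_pass(struct mock_page *memmap, size_t n, int nid)
  {
          size_t i;

          for (i = 0; i < n; i++) {
                  memmap[i].flags = 1;
                  memmap[i].nid = nid;
          }
  }

  int main(void)
  {
          size_t n = 1UL << 20;
          struct mock_page *memmap = malloc(n * sizeof(*memmap));

          if (!memmap)
                  return 1;
          init_three_passes(memmap, n, 0);
          init_one_pass(memmap, n, 0);
          printf("nid of page 0: %d\n", memmap[0].nid);
          free(memmap);
          return 0;
  }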

> The benefits:
> - We improve the memory hotplug performance because we are not evicting
> cache several times and also reduce loop branching overheads.

s/We improve the memory hotplug performance
/We improve memory hotplug performance

s/not evicting cache several times
/not evicting the cache several times

s/overheads
/overhead

> - Remove condition from hotpath in __init_single_pfn(), that was added in
> order to fix the problem that was reported by Bharata in the above email
> thread, thus also improve the performance during normal boot.

s/improve the performance
/improve performance

> - Make memory hotplug more similar to boot memory initialization path
> because we zero and initialize struct pages only in one function.

s/more similar to boot memory initialization path
/more similar to the boot memory initialization path

> - Simplifies memory hotplug strut page initialization code, and thus
> enables future improvements, such as multi-threading the initialization
> of struct pages in order to improve the hotplug performance even further
> on larger machines.

s/strut
/struct

s/to improve the hotplug performance even further
/to improve hotplug performance even further

> @@ -260,21 +260,12 @@ static int __meminit __add_section(int nid, unsigned long phys_start_pfn,
> return ret;
>
> /*
> - * Make all the pages reserved so that nobody will stumble over half
> - * initialized state.
> - * FIXME: We also have to associate it with a node because page_to_nid
> - * relies on having page with the proper node.
> + * The first page in every section holds node id, this is because we
> + * will need it in online_pages().

s/holds node id
/holds the node id
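
Side note: the pattern the new comment describes, as a stand-alone sketch
with mocked types and made-up helper names - the node id is stashed in the
section's first struct page at add time, so that the online path can read
it back before the rest of the memmap has been initialized:

  #define MOCK_PAGES_PER_SECTION 512   /* illustrative value only */

  struct mock_page { int nid; };

  /* At add time: only the first page of the section carries the nid. */
  static void stash_section_nid(struct mock_page *memmap, int nid)
  {
          memmap[0].nid = nid;
  }

  /* Later, in the online path: recover the nid from the first page. */
  static int section_nid(const struct mock_page *memmap)
  {
          return memmap[0].nid;
  }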

> +#ifdef CONFIG_DEBUG_VM
> + /*
> + * poison uninitialized struct pages in order to catch invalid flags
> + * combinations.

Please capitalize sentences properly.

> + */
> + memset(memmap, PAGE_POISON_PATTERN,
> + sizeof(struct page) * PAGES_PER_SECTION);
> +#endif

I'd suggest writing this into a single line:

memset(memmap, PAGE_POISON_PATTERN, sizeof(struct page)*PAGES_PER_SECTION);

(And ignore any checkpatch whinging - the line break didn't make it more
readable.)
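
Side note: the poisoning idea in a self-contained userspace form - the
struct and the pattern value are mocks, and the kernel's real
PAGE_POISON_PATTERN value differs. An entry that is still all-poison at
use time was never initialized, which makes invalid flag combinations
loud in CONFIG_DEBUG_VM builds:

  #include <string.h>

  #define MOCK_POISON 0xaa    /* illustrative; not the kernel's value */

  struct mock_page {
          unsigned long flags;
          int nid;
  };

  /* Poison the whole memmap so touching an uninitialized entry is loud. */
  static void poison_memmap(struct mock_page *memmap, size_t n)
  {
          memset(memmap, MOCK_POISON, n * sizeof(*memmap));
  }

  /* Debug check: an entry still full of poison was never initialized. */
  static int page_is_poisoned(const struct mock_page *page)
  {
          const unsigned char *p = (const unsigned char *)page;
          size_t i;

          for (i = 0; i < sizeof(*page); i++)
                  if (p[i] != MOCK_POISON)
                          return 0;
          return 1;
  }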

With those details fixed, and assuming that this patch was tested:

Reviewed-by: Ingo Molnar <mingo@kernel.org>

Thanks,

Ingo
