Subject: Re: [PATCH RFCv2 1/7] mm: introduce and use PageOffline()
Hi Dave,

A few comments below:

> + for (i = 0; i < PAGES_PER_SECTION; i++) {

Performance-wise, it is unfortunate that we have to add this loop for every hot-plug. But I do like the finer hot-plug granularity you achieve, and I do not have a better suggestion for avoiding the loop. What I also like is that init_single_page() is called only once per page.

> + unsigned long pfn = phys_start_pfn + i;
> + struct page *page;
> + if (!pfn_valid(pfn))
> + continue;
> + page = pfn_to_page(pfn);
> +
> + /* dummy zone, the actual one will be set when onlining pages */
> + init_single_page(page, pfn, ZONE_NORMAL, nid);

Is there a reason to use ZONE_NORMAL as the dummy zone? Maybe define some non-existent zone id for that instead, i.e. __MAX_NR_ZONES? That might trigger some debugging checks, of course.

In init_single_page(), if WANT_PAGE_VIRTUAL is defined, the zone is used to set the page's virtual address, which is broken if the page does not actually belong to ZONE_NORMAL:

	if (!is_highmem_idx(zone))
		set_page_address(page, __va(pfn << PAGE_SHIFT));

Otherwise, if you want to keep ZONE_NORMAL here, you could add a new function:

#ifdef WANT_PAGE_VIRTUAL
static void set_page_virtual(struct page *page, unsigned long pfn,
			     enum zone_type zone)
{
	/* The shift won't overflow because ZONE_NORMAL is below 4G. */
	if (!is_highmem_idx(zone))
		set_page_address(page, __va(pfn << PAGE_SHIFT));
}
#else
static inline void set_page_virtual(struct page *page, unsigned long pfn,
				    enum zone_type zone)
{
}
#endif

And call it from init_single_page(), and from memmap_init_zone() in the "context == MEMMAP_HOTPLUG" case.
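Something like this (a sketch only; I am assuming init_single_page() in your patch keeps the body of the current __init_single_page(), so adjust if it differs):

void __meminit init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid)
{
	mm_zero_struct_page(page);
	set_page_links(page, zone, nid, pfn);
	init_page_count(page);
	page_mapcount_reset(page);
	page_cpupid_reset_last(page);
	INIT_LIST_HEAD(&page->lru);

	/* replaces the open-coded WANT_PAGE_VIRTUAL #ifdef block */
	set_page_virtual(page, pfn, zone);
}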

>
> -static void __meminit __init_single_page(struct page *page, unsigned long pfn,
> +extern void __meminit init_single_page(struct page *page, unsigned long pfn,

I've seen it in other places, but what is the point of marking a function "extern" in a .c file?
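The usual way (a sketch; mm/internal.h is my guess for the header, and the zone/nid parameters are copied from the existing __init_single_page() signature) would be to declare the prototype in a shared header and keep the definition plain:

/* mm/internal.h */
void __meminit init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid);

/* mm/page_alloc.c - no "extern" needed on the definition */
void __meminit init_single_page(struct page *page, unsigned long pfn,
				unsigned long zone, int nid)
{
	/* body unchanged */
}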


> #ifdef CONFIG_MEMORY_HOTREMOVE
> -/* Mark all memory sections within the pfn range as online */
> +static bool all_pages_in_section_offline(unsigned long section_nr)
> +{
> + unsigned long pfn = section_nr_to_pfn(section_nr);
> + struct page *page;
> + int i;
> +
> + for (i = 0; i < PAGES_PER_SECTION; i++, pfn++) {
> + if (!pfn_valid(pfn))
> + continue;
> +
> + page = pfn_to_page(pfn);
> + if (!PageOffline(page))
> + return false;
> + }
> + return true;
> +}

Perhaps we could use a counter to keep track of the number of subsections that are currently offline? If a section covers 128M of memory and offline/online granularity is 4M, there are up to 32 subsections in a section, so a 6-bit counter (values 0 through 32) is enough. I am not sure if there is space in mem_section for this counter, but it would eliminate the loop above.
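Roughly something like this (a sketch only; the field, the helper names, and whether struct mem_section has spare room for the counter are all assumptions on my side):

/* 4M offline/online granularity, as in this series */
#define SUBSECTION_PAGES	(SZ_4M >> PAGE_SHIFT)
#define SUBSECTIONS_PER_SECTION	(PAGES_PER_SECTION / SUBSECTION_PAGES)

/*
 * Hypothetical new field in struct mem_section:
 *	unsigned long nr_offline_subsections;	(0..SUBSECTIONS_PER_SECTION)
 */

static void account_offline_subsection(struct mem_section *ms, bool offline)
{
	if (offline)
		ms->nr_offline_subsections++;
	else
		ms->nr_offline_subsections--;
}

static bool all_pages_in_section_offline(unsigned long section_nr)
{
	struct mem_section *ms = __nr_to_section(section_nr);

	return ms->nr_offline_subsections == SUBSECTIONS_PER_SECTION;
}

The offline/online paths would then only increment/decrement the counter for the affected section, and the O(PAGES_PER_SECTION) scan goes away.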

Thank you,
Pavel
