Subject: Re: [PATCH V2 2/2] arm64/mm: Enable memory hot remove
From: David Hildenbrand <>
Date: Mon, 15 Apr 2019 15:55:41 +0200
> +
> +#ifdef CONFIG_MEMORY_HOTREMOVE
> +int arch_remove_memory(int nid, u64 start, u64 size,
> +			struct vmem_altmap *altmap)
> +{
> +	unsigned long start_pfn = start >> PAGE_SHIFT;
> +	unsigned long nr_pages = size >> PAGE_SHIFT;
> +	struct zone *zone = page_zone(pfn_to_page(start_pfn));
> +	int ret;
> +
> +	ret = __remove_pages(zone, start_pfn, nr_pages, altmap);
> +	if (!ret)
Please note that I posted patches that remove all error handling from arch_remove_memory() and __remove_pages(). They are already in next/master.

So this gets a lot simpler and more predictable (rough sketch after the quoted hunk below).
Author: David Hildenbrand <david@redhat.com>
Date:   Wed Apr 10 11:02:27 2019 +1000

    mm/memory_hotplug: make __remove_pages() and arch_remove_memory() never fail

    All callers of arch_remove_memory() ignore errors. And we should really
    try to remove any errors from the memory removal path. No more errors
    are reported from __remove_pages(). BUG() in s390x code in case
    arch_remove_memory() is triggered. We may implement that properly
    later. WARN in case powerpc code failed to remove the section mapping,
    which is better than ignoring the error completely right now.
> +		__remove_pgd_mapping(swapper_pg_dir,
> +					__phys_to_virt(start), size);
> +	return ret;
> +}
> +#endif
> #endif
>
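For illustration, a rough, untested sketch of how this hunk could look once it sits on top of that series (assuming the void-returning arch_remove_memory()/__remove_pages() prototypes from next/master):

#ifdef CONFIG_MEMORY_HOTREMOVE
void arch_remove_memory(int nid, u64 start, u64 size,
			struct vmem_altmap *altmap)
{
	unsigned long start_pfn = start >> PAGE_SHIFT;
	unsigned long nr_pages = size >> PAGE_SHIFT;
	struct zone *zone = page_zone(pfn_to_page(start_pfn));

	/* No return value to check anymore; __remove_pages() cannot fail. */
	__remove_pages(zone, start_pfn, nr_pages, altmap);
	__remove_pgd_mapping(swapper_pg_dir, __phys_to_virt(start), size);
}
#endif

The nice part is that the pgd teardown no longer hinges on a return value that all callers ignored anyway.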
--
Thanks,
David / dhildenb