Subject: Re: [PATCH v5 1/2] memory_hotplug: Free pages as higher order
On 2018-10-10 23:03, Michal Hocko wrote:
> On Wed 10-10-18 22:26:41, Arun KS wrote:
>> On 2018-10-10 21:00, Vlastimil Babka wrote:
>> > On 10/5/18 10:10 AM, Arun KS wrote:
>> > > When free pages are done with higher order, time spent on
>> > > coalescing pages by buddy allocator can be reduced. With
>> > > section size of 256MB, hot add latency of a single section
>> > > shows improvement from 50-60 ms to less than 1 ms, hence
>> > > improving the hot add latency by 60 times. Modify external
>> > > providers of online callback to align with the change.
>> > >
>> > > Signed-off-by: Arun KS <arunks@codeaurora.org>
>> >
>> > [...]
>> >
>> > > @@ -655,26 +655,44 @@ void __online_page_free(struct page *page)
>> > > }
>> > > EXPORT_SYMBOL_GPL(__online_page_free);
>> > >
>> > > -static void generic_online_page(struct page *page)
>> > > +static int generic_online_page(struct page *page, unsigned int order)
>> > > {
>> > > - __online_page_set_limits(page);
>> >
>> > This is now not called anymore, although the xen/hv variants still do
>> > it. The function seems empty these days, maybe remove it as a followup
>> > cleanup?
>> >
>> > > - __online_page_increment_counters(page);
>> > > - __online_page_free(page);
>> > > + __free_pages_core(page, order);
>> > > + totalram_pages += (1UL << order);
>> > > +#ifdef CONFIG_HIGHMEM
>> > > + if (PageHighMem(page))
>> > > + totalhigh_pages += (1UL << order);
>> > > +#endif
>> >
>> > __online_page_increment_counters() would have used
>> > adjust_managed_page_count() which would do the changes under
>> > managed_page_count_lock. Are we safe without the lock? If yes, there
>> > should perhaps be a comment explaining why.
>>
>> Looks unsafe without managed_page_count_lock.
>
> Why does it matter actually? We cannot online/offline memory in
> parallel. This is not the case for the boot where we initialize memory
> in parallel on multiple nodes. So this seems to be safe currently
> unless
> I am missing something. A comment explaining that would be helpful
> though.
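
For what it's worth, the comment Michal asks for could read something like the
below, placed above the counter updates (only a sketch of the wording; it
assumes the serialization being relied on is the hotplug lock taken via
mem_hotplug_begin()/mem_hotplug_done()):

	/*
	 * Onlining/offlining is serialized by the memory hotplug lock
	 * (mem_hotplug_begin()/mem_hotplug_done()), so these updates do
	 * not race with another online/offline operation; that is why
	 * managed_page_count_lock is not taken here.
	 */
	totalram_pages += (1UL << order);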

Other main callers of adjust_managed_page_count():

static inline void free_reserved_page(struct page *page)
{
	__free_reserved_page(page);
	adjust_managed_page_count(page, 1);
}

static inline void mark_page_reserved(struct page *page)
{
	SetPageReserved(page);
	adjust_managed_page_count(page, -1);
}

Won't they race with memory hotplug?

A few more:
./drivers/xen/balloon.c:519:            adjust_managed_page_count(page, -1);
./drivers/virtio/virtio_balloon.c:175:  adjust_managed_page_count(page, -1);
./drivers/virtio/virtio_balloon.c:196:  adjust_managed_page_count(page, 1);
./mm/hugetlb.c:2158:                    adjust_managed_page_count(page, 1 << h->order);
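
For reference, adjust_managed_page_count() does all of its updates under
managed_page_count_lock; roughly (reproduced from mm/page_alloc.c from memory,
so treat it as a sketch):

void adjust_managed_page_count(struct page *page, long count)
{
	spin_lock(&managed_page_count_lock);
	/* zone and global counters are adjusted together, under the lock */
	page_zone(page)->managed_pages += count;
	totalram_pages += count;
#ifdef CONFIG_HIGHMEM
	if (PageHighMem(page))
		totalhigh_pages += count;
#endif
	spin_unlock(&managed_page_count_lock);
}

If the open-coded increments in generic_online_page() do turn out to need the
lock, one option (just a sketch, not what the patch posts) would be to mirror
what __online_page_increment_counters() did per page and go through the same
helper once for the whole block:

static int generic_online_page(struct page *page, unsigned int order)
{
	__free_pages_core(page, order);
	/* takes managed_page_count_lock and handles the highmem case */
	adjust_managed_page_count(page, 1L << order);
	return 0;
}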

Regards,
Arun
