Subject: Re: [PATCH v5, part3 11/15] mm: use a dedicated lock to protect totalram_pages and zone->managed_pages
On 05/08/2013 11:27 PM, Rik van Riel wrote:
> On 05/08/2013 11:17 AM, Jiang Liu wrote:
>
>> @@ -5186,6 +5189,15 @@ early_param("movablecore", cmdline_parse_movablecore);
>>
>> #endif /* CONFIG_HAVE_MEMBLOCK_NODE_MAP */
>>
>> +void adjust_managed_page_count(struct page *page, long count)
>> +{
>> + spin_lock(&managed_page_count_lock);
>> + page_zone(page)->managed_pages += count;
>> + totalram_pages += count;
>> + spin_unlock(&managed_page_count_lock);
>> +}
>> +EXPORT_SYMBOL(adjust_managed_page_count);
>> +
>
> Something I should have thought of when I reviewed the patch
> last time, but forgot...
>
> What happens when the hotplug event adds more pages than fit
> in this zone, and some of the pages should go in the next
> zone?
>
> For example, think about a 3GB x86_64 machine, which gets
> 2GB of memory hot-added. Roughly half may get added to the
> DMA32 zone, the rest to the NORMAL zone.
>
> Do the callers of adjust_managed_page_count correctly make
> one call for each zone, or does the above code open up a
> window for a bug?
Hi Rik,
Thanks for the review!
Yes, the callers make one call for each zone. Actually, they call
adjust_managed_page_count() once for each page, so page_zone() resolves
the correct zone even when a hot-added range spans a zone boundary.
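For illustration, a minimal sketch of that per-page pattern
(online_one_page() is a hypothetical helper name for this example, not
the actual caller in the series):

	/* Hypothetical per-page online helper, for illustration only. */
	static void online_one_page(struct page *page)
	{
		/*
		 * page_zone() looks up the zone of this specific page, so
		 * when a hot-added range straddles a zone boundary, each
		 * page bumps its own zone->managed_pages counter.
		 */
		adjust_managed_page_count(page, 1);
	}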
Regards!
Gerry

