Subject: Re: Early boot panic on machine with lots of memory
From: Yinghai Lu
Date: 2012-06-21
On Thu, Jun 21, 2012 at 1:17 PM, Tejun Heo <tj@kernel.org> wrote:
> Hello, Yinghai.
>
> On Tue, Jun 19, 2012 at 07:57:45PM -0700, Yinghai Lu wrote:
>> if that is the case, that change could fix other problems too
>> --- while freeing reserved.regions, one of the frees could double the array.
>
> Yeah, that sounds much more attractive to me too.  Some comments on
> the patch tho.
>
>>  /**
>>   * memblock_double_array - double the size of the memblock regions array
>>   * @type: memblock type of the regions array being doubled
>> @@ -216,7 +204,7 @@ static int __init_memblock memblock_double_array
>>
>>       /* Calculate new doubled size */
>>       old_size = type->max * sizeof(struct memblock_region);
>> -     new_size = old_size << 1;
>> +     new_size = PAGE_ALIGN(old_size << 1);
>
> We definitely can use some comments explaining why we want page
> alignment.  It's kinda subtle.

yes.
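
Maybe something along these lines (draft wording, not the final
comment):

        /*
         * Allocate the new array with a PAGE_ALIGN()ed size.  The
         * array may later be handed back to the page allocator as
         * whole pages (see free_low_memory_core_early()), and the
         * free side also page-aligns the size, so the allocation has
         * to cover complete pages or we could free memory past the
         * end of the array.
         */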

>
> This is a bit confusing here because old_size is the proper size
> without padding while new_size is the page-aligned size with possible
> padding.  Maybe distinguishing {old|new}_alloc_size is clearer?  Also,
> I think adding a @new_max variable which is calculated together would
> make the code easier to follow.  So, something like,
>
>        /* explain why page aligning is necessary */
>        old_size = type->max * sizeof(struct memblock_region);
>        old_alloc_size = PAGE_ALIGN(old_size);
>
>        new_max = type->max << 1;
>        new_size = new_max * sizeof(struct memblock_region);
>        new_alloc_size = PAGE_ALIGN(new_size);
>
> and use alloc_sizes for alloc/frees and sizes for everything else.

OK, will add new_alloc_size and old_alloc_size.
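
With those, the hunk would look roughly like this (a sketch only; the
exact allocation and free sites may differ in the final patch):

        old_size = type->max * sizeof(struct memblock_region);
        old_alloc_size = PAGE_ALIGN(old_size);

        new_max = type->max << 1;
        new_size = new_max * sizeof(struct memblock_region);
        new_alloc_size = PAGE_ALIGN(new_size);

        /* allocate the new array with the padded size ... */
        addr = memblock_find_in_range(0, memblock.current_limit,
                                      new_alloc_size, PAGE_SIZE);

        /* ... and later free the old array with its padded size */
        memblock_free(__pa(old_array), old_alloc_size);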

>
>>  unsigned long __init free_low_memory_core_early(int nodeid)
>>  {
>>       unsigned long count = 0;
>> -     phys_addr_t start, end;
>> +     phys_addr_t start, end, size;
>>       u64 i;
>>
>> -     /* free reserved array temporarily so that it's treated as free area */
>> -     memblock_free_reserved_regions();
>> +     for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL)
>> +             count += __free_memory_core(start, end);
>>
>> -     for_each_free_mem_range(i, MAX_NUMNODES, &start, &end, NULL) {
>> -             unsigned long start_pfn = PFN_UP(start);
>> -             unsigned long end_pfn = min_t(unsigned long,
>> -                                           PFN_DOWN(end), max_low_pfn);
>> -             if (start_pfn < end_pfn) {
>> -                     __free_pages_memory(start_pfn, end_pfn);
>> -                     count += end_pfn - start_pfn;
>> -             }
>> -     }
>> +     /* free the range used for the reserved array if we allocated it */
>> +     size = get_allocated_memblock_reserved_regions_info(&start);
>> +     if (size)
>> +             count += __free_memory_core(start, start + size);
>
> I'm afraid this is too early.  We don't want the region to be unmapped
> yet.  This should only happen after all memblock usages are finished
> which I don't think is the case yet.

No, it is not too early: by that point all memblock usage is done.

Also, I tested on one system with huge memory and reproduced on KVM the
problem that Sasha hit. My patch fixes it.

Please check the attached patch.
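
For reference, __free_memory_core() just factors out the loop body that
the hunk above removes, roughly:

        static unsigned long __init __free_memory_core(phys_addr_t start,
                                                       phys_addr_t end)
        {
                unsigned long start_pfn = PFN_UP(start);
                unsigned long end_pfn = min_t(unsigned long,
                                              PFN_DOWN(end), max_low_pfn);

                /* nothing to free if the range doesn't span a full page */
                if (start_pfn >= end_pfn)
                        return 0;

                __free_pages_memory(start_pfn, end_pfn);
                return end_pfn - start_pfn;
        }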

Also, I added another patch to double-check whether anything still
references reserved.regions after the free; so far no such reference
has been found.
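
The check itself is trivial; the idea is something like this
(hypothetical sketch, not the attached patch itself):

        /*
         * Debug aid: poison the reserved.regions array as soon as it
         * is handed back, so any stale walker of memblock.reserved
         * reads obvious garbage instead of silently working.
         * get_allocated_memblock_reserved_regions_info() is the helper
         * added by the patch above; 0xcc is an arbitrary poison value.
         */
        size = get_allocated_memblock_reserved_regions_info(&start);
        if (size)
                memset(__va(start), 0xcc, size);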

Thanks

Yinghai
[two attachments: application/octet-stream]