Date: 2011-02-25
From: Yinghai Lu <yinghai@kernel.org>
Subject: Re: [PATCH 2/2] x86,mm,64bit: Round up memory boundary for init_memory_mapping_high()
On 02/25/2011 02:03 AM, Ingo Molnar wrote:
>
> * Yinghai Lu <yinghai@kernel.org> wrote:
>
>> init_memory_mapping_active_regions(unsigned long start, unsigned long end)
>> {
>> 	struct mapping_work_data data;
>> +	int use_gbpages;
>> +
>> +	/* see init_memory_mapping() for the setting */
>> +#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
>> +	use_gbpages = 0;
>> +#else
>> +	use_gbpages = direct_gbpages;
>> +#endif
>
> Sigh. You should *never* ever even think about writing such code. It only results in
> crap, and in crap duplicated elsewhere as well:
>
> #if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
> 	/*
> 	 * For CONFIG_DEBUG_PAGEALLOC, identity mapping will use small pages.
> 	 * This will simplify cpa(), which otherwise needs to support splitting
> 	 * large pages into small in interrupt context, etc.
> 	 */
> 	use_pse = use_gbpages = 0;
> #else
> 	use_pse = cpu_has_pse;
> 	use_gbpages = direct_gbpages;
> #endif

Sorry, actually I copied it from there.
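
One way to avoid repeating that #if block in both places would be to factor
the decision into a single helper that init_memory_mapping() and
init_memory_mapping_active_regions() both call. A rough, untested sketch
(the helper name is made up, not from any posted patch):

/* sketch only: keep the CONFIG_* test in exactly one place */
static int __init want_gbpages(void)
{
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
	/* debug page alloc / kmemcheck need the identity map in small pages */
	return 0;
#else
	return direct_gbpages;
#endif
}

Both call sites would then just do use_gbpages = want_gbpages(), and the
CONFIG_* test would live in one place.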

Or I could add a max_map_unit_size variable?
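
Roughly, it could be computed once at boot and then consulted by the mapping
code instead of repeating the CONFIG_* test. An untested sketch (the probe
function name and the use of PAGE_SIZE/PMD_SIZE/PUD_SIZE as the 4K/2M/1G
granularities are only for illustration):

unsigned long max_map_unit_size __initdata = PAGE_SIZE;	/* default: 4K */

static void __init probe_max_map_unit_size(void)
{
#if defined(CONFIG_DEBUG_PAGEALLOC) || defined(CONFIG_KMEMCHECK)
	max_map_unit_size = PAGE_SIZE;		/* force small pages for cpa() */
#else
	if (direct_gbpages)
		max_map_unit_size = PUD_SIZE;	/* 1G pages allowed */
	else if (cpu_has_pse)
		max_map_unit_size = PMD_SIZE;	/* 2M pages allowed */
#endif
}

Callers would then derive use_gbpages = (max_map_unit_size >= PUD_SIZE) and
use_pse = (max_map_unit_size >= PMD_SIZE) instead of duplicating the #if.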

Thanks

Yinghai



