Subject: Re: [PATCH] Document huge memory/cache overhead of memory controller in Kconfig
Andi Kleen wrote:
> Balbir Singh wrote:
>> Andi Kleen wrote:
>>>> 1. We could create something similar to mem_map, we would need to handle 4
>>> 4? At least x86 mainline only has two ways now. flatmem and vmemmap.
>>>
>>>> different ways of creating mem_map.
>>> Well it would be only a single way to create the "aux memory controller
>>> map" (or however it will be called). Basically just a call to single
>>> function from a few different places.
>>>
>>>> 2. On x86 with 64 GB ram,
>>> First i386 with 64GB just doesn't work, at least not with default 3:1
>>> split. Just calculate it yourself how much of the lowmem area is left
>>> after the 64GB mem_map is allocated. Typical rule of thumb is that 16GB
>>> is the realistic limit for 32bit x86 kernels. Worrying about
>>> anything more does not make much sense.
>>>
>> I understand what you say Andi, but nothing in the kernel stops us from
>> supporting 64GB.
>
> Well in practice it just won't work at least at default page offset.
>
>> Should a framework like memory controller make an assumption
>> that not more than 16GB will be configured on an x86 box?
>
> It doesn't need to. Just increase __VMALLOC_RESERVE by the
> respective amount (end_pfn * sizeof(unsigned long))
>
> Then 64GB still won't work in practice, but at least you made no such
> assumption in theory @)
>
> Also there is the issue of memory hotplug. In theory later
> memory hotplugs could fill up vmalloc. Luckily x86 BIOS
> are supposed to declare how much they plan to hot add memory later
> using the SRAT memory hotplug area (in fact the old non sparsemem
> hotadd implementation even relied on that). It would
> be possible to adjust __VMALLOC_RESERVE at boot even for that. I suspect
> this issue could be also just ignored at first; it is unlikely
> to be serious.
>

My concern with all the points you raise is that this solution might need to
change again, depending on those factors. vmalloc() is good and
straightforward, but it has these dependencies, which could call for another
rewrite of the code.
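
For reference, here is how the end_pfn * sizeof(unsigned long) figure works
out for the 64 GB case. This is only a back-of-the-envelope sketch, assuming
4 KB pages and a 4-byte unsigned long on 32-bit; it just reproduces the 64 MB
number quoted below:

#include <stdio.h>

int main(void)
{
	unsigned long long ram_bytes  = 64ULL << 30;	/* 64 GB of RAM */
	unsigned long long page_size  = 4096;		/* 4 KB pages on x86 */
	unsigned long long entry_size = 4;		/* sizeof(unsigned long) on 32-bit */

	unsigned long long end_pfn = ram_bytes / page_size;	/* 16M pages */
	unsigned long long extra   = end_pfn * entry_size;	/* per-page overhead */

	printf("end_pfn: %llu pages\n", end_pfn);
	printf("extra vmalloc reservation: %llu MB\n", extra >> 20);
	return 0;
}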

>
>>>> if we decided to use vmalloc space, we would need 64
>>>> MB of vmalloc'ed memory
>>> Yes and if you increase mem_map you need exactly the same space
>>> in lowmem too. So increasing the vmalloc reservation for this is
>>> equivalent. Just make sure you use highmem backed vmalloc.
>>>
>> I see two problems with using vmalloc. One, the reservation needs to be done
>> across architectures.
>
> Only on 32bit. Ok hacking it into all 32bit architectures might be
> difficult, but I assume it would be ok to rely on the architecture
> maintainers for that and only enable it on some selected architectures
> using Kconfig for now.
>

Yes, but that's not such a good idea.

> On 64bit vmalloc should be by default large enough so it could
> be enabled for all 64bit architectures.
>
>> Two, a big vmalloc chunk is not node aware,
>
> vmalloc_node()
>

vmalloc_node() would need to work much the same way as mem_map does. I am
tempted to try the mem_map and radix tree approaches. I think KAMEZAWA is
already working on this and has a first draft of the radix tree changes ready.
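
Roughly what I have in mind for the radix tree direction, as a sketch only
(the helper names are made up, locking and error handling are omitted, and
this is not KAMEZAWA's draft):

#include <linux/radix-tree.h>
#include <linux/gfp.h>

struct page_cgroup;

/*
 * Sketch: track the page_cgroup for each pfn in a radix tree instead of
 * growing mem_map or reserving vmalloc space up front.  Callers would
 * have to serialise insert/remove themselves (e.g. with a spinlock),
 * and radix_tree_preload() would be needed on the insert path.
 */
static RADIX_TREE(pfn_to_page_cgroup, GFP_ATOMIC);

static int page_cgroup_assign(unsigned long pfn, struct page_cgroup *pc)
{
	return radix_tree_insert(&pfn_to_page_cgroup, pfn, pc);
}

static struct page_cgroup *page_cgroup_find(unsigned long pfn)
{
	return radix_tree_lookup(&pfn_to_page_cgroup, pfn);
}

The trade-off versus a flat array is an extra lookup on every charge and
uncharge, which we would have to measure against the memory savings.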

> -Andi


--
Warm Regards,
Balbir Singh
Linux Technology Center
IBM, ISTL

