Subject: Re: [PATCH 2/8] mm: alloc_contig_freed_pages() added
On Wed, 2011-09-21 at 15:17 +0200, Michal Nazarewicz wrote:
> > This 'struct page *'++ stuff is OK, but only for small, aligned areas.
> > For at least some of the sparsemem modes (non-VMEMMAP), you could walk
> > off of the end of the section_mem_map[] when you cross a MAX_ORDER
> > boundary. I'd feel a little bit more comfortable if pfn_to_page() was
> > being done each time, or only occasionally when you cross a section
> > boundary.
>
> I'm fine with that. I've used pointer arithmetic for performance reasons,
> but if that can potentially lead to bugs then obviously pfn_to_page()
> should be used.

pfn_to_page() on x86 these days is usually:

#define __pfn_to_page(pfn) (vmemmap + (pfn))

Even for the non-vmemmap sparsemem it stays pretty quick because the
section array is in cache as you run through the loop.
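For illustration, the per-pfn pattern being suggested is roughly the
following (just a sketch, not the actual alloc_contig_freed_pages() code;
'start' and 'end' stand in for whatever pfn range the caller has):

	unsigned long pfn;
	struct page *page;

	for (pfn = start; pfn < end; pfn++) {
		/*
		 * Safe across section/MAX_ORDER boundaries in all
		 * sparsemem modes; on x86 this is just vmemmap + pfn.
		 */
		page = pfn_to_page(pfn);
		/* ... operate on page ... */
	}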

There are ways to _minimize_ the number of pfn_to_page() calls by
checking when you cross a section boundary, or even at a
MAX_ORDER_NR_PAGES boundary. But, I don't think it's worth the trouble.
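If somebody did want to do that, it would look something like this (again
only a sketch): re-derive the pointer at each MAX_ORDER_NR_PAGES boundary
and use pointer arithmetic in between:

	unsigned long pfn = start;
	struct page *page = pfn_to_page(pfn);

	for (; pfn < end; pfn++, page++) {
		/* redo the translation when entering a new MAX_ORDER block */
		if (!(pfn & (MAX_ORDER_NR_PAGES - 1)))
			page = pfn_to_page(pfn);
		/* ... operate on page ... */
	}

Within a single MAX_ORDER block the struct pages are contiguous, so the
plain increment is valid there; only the boundary crossing needs the full
translation.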

-- Dave


