Subject: [PATCH v2 0/9] mm: zone & pgdat accessors plus some cleanup
Summaries:
1 - avoid repeating the section check in page flags by adding a define.
2 - add zone_end_pfn() and zone_spans_pfn() and switch users over to them
    (a sketch of the new accessors follows this list).
3 - add zone_is_initialized() and zone_is_empty().
4 - add a VM_BUG_ON using zone_is_initialized() in __free_one_page().
5 - add pgdat_end_pfn() and pgdat_is_empty().
6 - add a debugging message to the VM_BUG_ON check.
7 - add ensure_zone_is_initialized() (for memory_hotplug).
8 - use the above addition in memory_hotplug.
9 - use pgdat_end_pfn().
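
For reference, here is a minimal sketch of what the span accessors from
patches 2, 3 and 5 could look like, assuming the existing
zone_start_pfn/spanned_pages and node_start_pfn/node_spanned_pages fields
(the real definitions are in the include/linux/mmzone.h hunks of those
patches):

static inline unsigned long zone_end_pfn(const struct zone *zone)
{
	/* one past the last pfn the zone spans */
	return zone->zone_start_pfn + zone->spanned_pages;
}

static inline bool zone_spans_pfn(const struct zone *zone, unsigned long pfn)
{
	return zone->zone_start_pfn <= pfn && pfn < zone_end_pfn(zone);
}

static inline bool zone_is_empty(struct zone *zone)
{
	return zone->spanned_pages == 0;
}

static inline unsigned long pgdat_end_pfn(pg_data_t *pgdat)
{
	return pgdat->node_start_pfn + pgdat->node_spanned_pages;
}

static inline bool pgdat_is_empty(pg_data_t *pgdat)
{
	return !pgdat->node_spanned_pages;
}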

As a general concern: spanned_pages & start_pfn (in both pgdat & zone) are
supposed to be read under a seqlock because memory_hotplug can change them,
but very few (only 1?) of their users appear to actually take that lock.
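
For context, a read that does honor the seqlock would look roughly like the
sketch below, modelled on page_outside_zone_boundaries() in mm/page_alloc.c
and using the zone_span_seqbegin()/zone_span_seqretry() helpers; the function
name here is made up for illustration:

/* sketch: a zone-span check that retries if memory_hotplug resizes the zone */
static bool pfn_in_zone_locked(struct zone *zone, unsigned long pfn)
{
	unsigned seq;
	bool ret;

	do {
		seq = zone_span_seqbegin(zone);
		ret = zone_spans_pfn(zone, pfn);
	} while (zone_span_seqretry(zone, seq));

	return ret;
}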

--

Since v1:
- drop the zone+pgdat growth factoring (I use this in some WIP code to reassign
  the NUMA node a page belongs to; it will be sent with that patchset)
- merge the zone_end_pfn() & zone_spans_pfn() introduction & usage
- split zone_is_initialized() & zone_is_empty() out from the above
- add a missing semicolon

include/linux/mm.h | 8 ++++++--
include/linux/mmzone.h | 34 +++++++++++++++++++++++++++++----
mm/compaction.c | 10 +++++-----
mm/kmemleak.c | 5 ++---
mm/memory_hotplug.c | 52 ++++++++++++++++++++++++++------------------------
mm/page_alloc.c | 31 +++++++++++++++++-------------
mm/vmstat.c | 2 +-
7 files changed, 89 insertions(+), 53 deletions(-)


