Date: 2015-03-05
Subject: Re: [RFC 00/16] Introduce ZONE_CMA
On 02/12/2015 08:32 AM, Joonsoo Kim wrote:
>
> 1) Break the non-overlapping zone assumption
> CMA regions can be spread across the whole memory range, so, to keep all of
> them in one zone, the span of ZONE_CMA would overlap other zones' spans.

From patch 13/16 it seems to me that ZONE_CMA indeed spans the area of all
other zones. This seems very inefficient, e.g. for the compaction scanners,
which will repeatedly skip huge numbers of pageblocks that don't belong to
ZONE_CMA. Could you instead pick only a single zone per node from which you
steal the pages? That would keep the span low.
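
To illustrate the overhead (a rough sketch of mine, not code from this
series): once zone spans may overlap, anything that walks ZONE_CMA's pfn
range has to check page ownership explicitly, and with a node-wide span
almost every check fails, so most of the walk is wasted work:

#include <linux/mm.h>
#include <linux/mmzone.h>

/*
 * Count how many pageblocks inside this zone's span actually belong to
 * it, stepping in pageblock-sized strides for brevity.  With a node-wide
 * ZONE_CMA, nearly every iteration fails the page_zone() check, i.e. it
 * is pure scanning overhead.
 */
static unsigned long pageblocks_in_zone(struct zone *zone)
{
        unsigned long pfn, end = zone_end_pfn(zone);
        unsigned long nr = 0;

        for (pfn = zone->zone_start_pfn; pfn < end; pfn += pageblock_nr_pages) {
                if (!pfn_valid(pfn))
                        continue;
                /* pfn lies in the span, but may belong to another zone */
                if (page_zone(pfn_to_page(pfn)) != zone)
                        continue;
                nr++;
        }

        return nr;
}

Stealing from a single zone per node would keep zone_start_pfn and
zone_end_pfn close together and avoid most of that skipping.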

Another disadvantage I see is that to allocate from ZONE_CMA you will now have
to reclaim enough pages within the zone itself. I think the cma allocation
supports migrating pages from ZONE_CMA to the adjacent non-CMA zone, which
would be equivalent to migration from MIGRATE_CMA pageblocks to the rest of
the zone?

> I'm not sure that there was an assumption about the possibility of zone
> overlap, but, if ZONE_CMA is introduced, this overlap becomes reality,
> so we should deal with this situation. I investigated most of the sites
> that iterate pfns within a certain zone and found that they normally
> don't consider zone overlap. I tried to handle these cases myself early
> in this series. I hope that no site remains that depends on the
> non-overlapping zone assumption when iterating pfns within a zone.
>
> I passed the boot test on x86, ARM32 and ARM64. I did some stress tests
> on x86 and there were no problems. Feel free to enjoy it and please give
> me feedback. :)
>
> This patchset is based on v3.18.
>
> Thanks.
>
>
>
> Joonsoo Kim (16):
> mm/page_alloc: correct highmem memory statistics
> mm/writeback: correct dirty page calculation for highmem
> mm/highmem: make nr_free_highpages() handles all highmem zones by
> itself
> mm/vmstat: make node_page_state() handles all zones by itself
> mm/vmstat: watch out zone range overlap
> mm/page_alloc: watch out zone range overlap
> mm/page_isolation: watch out zone range overlap
> power: watch out zone range overlap
> mm/cma: introduce cma_total_pages() for future use
> mm/highmem: remove is_highmem_idx()
> mm/page_alloc: clean-up free_area_init_core()
> mm/cma: introduce new zone, ZONE_CMA
> mm/cma: populate ZONE_CMA and use this zone when GFP_HIGHUSERMOVABLE
> mm/cma: print stealed page count
> mm/cma: remove ALLOC_CMA
> mm/cma: remove MIGRATE_CMA
>
> arch/x86/include/asm/sparsemem.h | 2 +-
> arch/x86/mm/highmem_32.c | 3 +
> include/linux/cma.h | 9 ++
> include/linux/gfp.h | 31 +++---
> include/linux/mempolicy.h | 2 +-
> include/linux/mm.h | 1 +
> include/linux/mmzone.h | 58 +++++-----
> include/linux/page-flags-layout.h | 2 +
> include/linux/vm_event_item.h | 8 +-
> include/linux/vmstat.h | 26 +----
> kernel/power/snapshot.c | 15 +++
> lib/show_mem.c | 2 +-
> mm/cma.c | 70 ++++++++++--
> mm/compaction.c | 6 +-
> mm/highmem.c | 12 +-
> mm/hugetlb.c | 2 +-
> mm/internal.h | 3 +-
> mm/memory_hotplug.c | 3 +
> mm/mempolicy.c | 3 +-
> mm/page-writeback.c | 8 +-
> mm/page_alloc.c | 223 +++++++++++++++++++++----------------
> mm/page_isolation.c | 14 ++-
> mm/vmscan.c | 2 +-
> mm/vmstat.c | 16 ++-
> 24 files changed, 317 insertions(+), 204 deletions(-)
>


