From: Wen Congyang <wency@cn.fujitsu.com>
Subject: [Patch v4 12/12] memory-hotplug: free node_data when a node is offlined
Date: 2012-11-27
We call hotadd_new_pgdat() to allocate memory to store node_data when a node
is hot-added, so we should free that memory when the node is removed.
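
For context (not part of the patch itself): below is a minimal sketch of the
lifecycle this change completes, using the existing memory-hotplug helpers
from mm/memory_hotplug.c and include/linux/memory_hotplug.h. The wrapper
function name is invented for illustration only; the point is that
hotadd_new_pgdat() allocates the pgdat on hot-add, and the
PageSlab()/PageCompound() test on virt_to_page(pgdat) is what tells a
dynamically allocated pgdat apart from one handed out by the boot allocator,
which must not be freed.

/*
 * Illustrative sketch only -- not part of this patch.  The function name
 * node_data_lifecycle_sketch() is made up; the calls it makes are the
 * existing memory-hotplug interfaces, so conceptually this would live in
 * mm/memory_hotplug.c where hotadd_new_pgdat() is defined.
 */
static void node_data_lifecycle_sketch(int nid, u64 start)
{
	/* hot-add path: add_memory() allocates node_data for a new node */
	pg_data_t *pgdat = hotadd_new_pgdat(nid, start);

	/* ... the node is used, then all of its memory sections are removed ... */

	/* hot-remove path: what the new code in try_offline_node() does */
	if (!PageSlab(virt_to_page(pgdat)) && !PageCompound(virt_to_page(pgdat)))
		return;	/* pgdat came from boot memory; leave it alone */

	arch_refresh_nodedata(nid, NULL);	/* clear the node_data[] pointer */
	arch_free_nodedata(pgdat);		/* free the dynamically allocated pgdat */
}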

CC: David Rientjes <rientjes@google.com>
CC: Jiang Liu <liuj97@gmail.com>
CC: Len Brown <len.brown@intel.com>
CC: Benjamin Herrenschmidt <benh@kernel.crashing.org>
CC: Paul Mackerras <paulus@samba.org>
CC: Christoph Lameter <cl@linux.com>
Cc: Minchan Kim <minchan.kim@gmail.com>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com>
CC: Yasuaki Ishimatsu <isimatu.yasuaki@jp.fujitsu.com>
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
 mm/memory_hotplug.c | 20 +++++++++++++++++++-
 1 file changed, 19 insertions(+), 1 deletion(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 449663e..d1451ab 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1309,9 +1309,12 @@ static int check_cpu_on_node(void *data)
 /* offline the node if all memory sections of this node are removed */
 static void try_offline_node(int nid)
 {
+	pg_data_t *pgdat = NODE_DATA(nid);
 	unsigned long start_pfn = NODE_DATA(nid)->node_start_pfn;
-	unsigned long end_pfn = start_pfn + NODE_DATA(nid)->node_spanned_pages;
+	unsigned long end_pfn = start_pfn + pgdat->node_spanned_pages;
 	unsigned long pfn;
+	struct page *pgdat_page = virt_to_page(pgdat);
+	int i;
 
 	for (pfn = start_pfn; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
 		unsigned long section_nr = pfn_to_section_nr(pfn);
@@ -1338,6 +1341,21 @@ static void try_offline_node(int nid)
 	 */
 	node_set_offline(nid);
 	unregister_one_node(nid);
+
+	if (!PageSlab(pgdat_page) && !PageCompound(pgdat_page))
+		/* node data is allocated from boot memory */
+		return;
+
+	/* free waittable in each zone */
+	for (i = 0; i < MAX_NR_ZONES; i++) {
+		struct zone *zone = pgdat->node_zones + i;
+
+		if (zone->wait_table)
+			vfree(zone->wait_table);
+	}
+
+	arch_refresh_nodedata(nid, NULL);
+	arch_free_nodedata(pgdat);
 }
 
 int __ref remove_memory(int nid, u64 start, u64 size)
--
1.8.0

