    Subject: Re: [PATCH v4 2/5] mm: Create non-atomic version of SetPageReserved for init use


    On 9/20/18 6:27 PM, Alexander Duyck wrote:
    > It doesn't make much sense to use the atomic SetPageReserved at init time
    > when we are using memset to clear the memory and manipulating the page
    > flags via simple "&=" and "|=" operations in __init_single_page.
    >
    > This patch adds a non-atomic version __SetPageReserved that can be used
    > during page init and shows about a 10% improvement in initialization times
    > on the systems I have available for testing. On those systems I saw
    > initialization times drop from around 35 seconds to around 32 seconds to
    > initialize a 3TB block of persistent memory. I believe the main advantage
    > of this is that it allows for more compiler optimization as the __set_bit
    > operation can be reordered whereas the atomic version cannot.
    >
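
    (An aside for the archive, to make the "reordered" point above concrete:
    below is a minimal user-space sketch, not kernel code. flag_set_atomic()
    and flag_set_plain() are made-up stand-ins for set_bit()/__set_bit(); the
    point is only that the plain "|=" can be merged or moved by the compiler
    while the atomic read-modify-write cannot.)

    #include <stdio.h>

    struct fake_page {
            unsigned long flags;
    };

    /* Stand-in for set_bit(): an atomic read-modify-write.  The compiler
     * must emit a real atomic instruction here and cannot fold it into the
     * plain stores around it. */
    static void flag_set_atomic(unsigned long nr, unsigned long *addr)
    {
            __atomic_fetch_or(addr, 1UL << nr, __ATOMIC_RELAXED);
    }

    /* Stand-in for __set_bit(): an ordinary read-modify-write.  Nothing
     * marks it as shared, so the compiler may combine it with the other
     * "&=" / "|=" done during init, or reorder it. */
    static void flag_set_plain(unsigned long nr, unsigned long *addr)
    {
            *addr |= 1UL << nr;
    }

    int main(void)
    {
            struct fake_page page = { .flags = 0 };

            /* Mimic __init_single_page()-style plain flag manipulation... */
            page.flags &= ~0xffUL;
            page.flags |= 0x3UL;

            /* ...then mark the page reserved, both ways, for comparison. */
            flag_set_plain(2, &page.flags);
            flag_set_atomic(3, &page.flags);

            printf("flags = %#lx\n", page.flags);
            return 0;
    }

    With -O2 the plain "|=" operations above typically collapse into a single
    store, while the atomic one stays a separate locked instruction on x86;
    that is the extra freedom the patch description refers to.
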
    > I tried adding a bit of documentation based on commit <f1dd2cd13c4> ("mm,
    > memory_hotplug: do not associate hotadded memory to zones until online").
    >
    > Ideally the reserved flag should be set earlier since there is a brief
    > window where the page is initialized via __init_single_page and we have
    > not set the PG_reserved flag. I'm leaving that for a future patch set as
    > that will require a more significant refactor.
    >
    > Acked-by: Michal Hocko <mhocko@suse.com>
    > Signed-off-by: Alexander Duyck <alexander.h.duyck@linux.intel.com>

    Reviewed-by: Pavel Tatashin <pavel.tatashin@microsoft.com>

    > ---
    >
    > v4: Added comment about __set_bit vs set_bit to the patch description
    >
    > include/linux/page-flags.h |    1 +
    > mm/page_alloc.c            |    9 +++++++--
    > 2 files changed, 8 insertions(+), 2 deletions(-)
    >
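
    Context for the first hunk below, for readers without page-flags.h in
    front of them: the PAGEFLAG()/__SETPAGEFLAG() macro family generates
    small inline helpers. Very roughly, and with the page-policy argument
    (PF_NO_COMPOUND and friends) omitted, the existing and the newly added
    lines correspond to helpers of this shape. struct page, PG_reserved,
    set_bit() and __set_bit() below are simplified stand-ins so the sketch
    is self-contained outside the kernel:

    struct page { unsigned long flags; };
    enum { PG_reserved = 0 };

    static inline void set_bit(long nr, volatile unsigned long *addr)
    {
            __atomic_fetch_or(addr, 1UL << nr, __ATOMIC_SEQ_CST);
    }

    static inline void __set_bit(long nr, volatile unsigned long *addr)
    {
            *addr |= 1UL << nr;
    }

    /* What PAGEFLAG(Reserved, reserved, ...) already provides (among other
     * helpers such as PageReserved() and ClearPageReserved()): */
    static inline void SetPageReserved(struct page *page)
    {
            set_bit(PG_reserved, &page->flags);          /* atomic */
    }

    /* What the new __SETPAGEFLAG(Reserved, reserved, ...) line adds: */
    static inline void __SetPageReserved(struct page *page)
    {
            __set_bit(PG_reserved, &page->flags);        /* non-atomic */
    }
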
    > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
    > index 934f91ef3f54..50ce1bddaf56 100644
    > --- a/include/linux/page-flags.h
    > +++ b/include/linux/page-flags.h
    > @@ -303,6 +303,7 @@ static inline void page_init_poison(struct page *page, size_t size)
    >  
    >  PAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
    >          __CLEARPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
    > +        __SETPAGEFLAG(Reserved, reserved, PF_NO_COMPOUND)
    >  PAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
    >          __CLEARPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
    >          __SETPAGEFLAG(SwapBacked, swapbacked, PF_NO_TAIL)
    > diff --git a/mm/page_alloc.c b/mm/page_alloc.c
    > index 712cab17f86f..29bd662fffd7 100644
    > --- a/mm/page_alloc.c
    > +++ b/mm/page_alloc.c
    > @@ -1239,7 +1239,12 @@ void __meminit reserve_bootmem_region(phys_addr_t start, phys_addr_t end)
    >                          /* Avoid false-positive PageTail() */
    >                          INIT_LIST_HEAD(&page->lru);
    >  
    > -                        SetPageReserved(page);
    > +                        /*
    > +                         * no need for atomic set_bit because the struct
    > +                         * page is not visible yet so nobody should
    > +                         * access it yet.
    > +                         */
    > +                        __SetPageReserved(page);
    >                  }
    >          }
    >  }
    > @@ -5513,7 +5518,7 @@ void __meminit memmap_init_zone(unsigned long size, int nid, unsigned long zone,
    >                  page = pfn_to_page(pfn);
    >                  __init_single_page(page, pfn, zone, nid);
    >                  if (context == MEMMAP_HOTPLUG)
    > -                        SetPageReserved(page);
    > +                        __SetPageReserved(page);
    >  
    >                  /*
    >                   * Mark the block movable so that blocks are reserved for
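
    On the new comment in reserve_bootmem_region(): the reason the plain
    setter is safe there is the usual initialize-privately-then-publish
    ordering. A minimal user-space sketch of that idea only (the names are
    made up, and this is not how the kernel actually publishes the memmap):

    #include <stdatomic.h>
    #include <stdio.h>

    struct fake_page {
            unsigned long flags;
    };

    static _Atomic(struct fake_page *) published;

    /* Nobody else can reach *page yet, so plain stores are sufficient. */
    static void init_one_page(struct fake_page *page)
    {
            page->flags = 0;
            page->flags |= 1UL << 2;        /* e.g. mark it "reserved" */
    }

    /* Publication point: only after this release store may other threads
     * observe the page; it pairs with the acquire load in lookup(). */
    static void publish(struct fake_page *page)
    {
            atomic_store_explicit(&published, page, memory_order_release);
    }

    static struct fake_page *lookup(void)
    {
            return atomic_load_explicit(&published, memory_order_acquire);
    }

    int main(void)
    {
            static struct fake_page page;

            init_one_page(&page);   /* plain, non-atomic initialization */
            publish(&page);         /* now it becomes visible to others */

            struct fake_page *p = lookup();
            printf("flags = %#lx\n", p ? p->flags : 0UL);
            return 0;
    }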