    From:    Michal Nazarewicz
    Subject: Re: [PATCH 5/6] mm, compaction: skip compound pages by order in free scanner
    Date:    2015-06-12
    On Wed, Jun 10 2015, Vlastimil Babka wrote:
    > The compaction free scanner is looking for PageBuddy() pages and skipping all
    > others. For large compound pages such as THP or hugetlbfs, we can save a lot
    > of iterations if we skip them at once using their compound_order(). This is
    > generally unsafe and we can read a bogus value of order due to a race, but if
    > we are careful, the only danger is skipping too much.
    >
    > When tested with stress-highalloc from mmtests on a 4GB system with 1GB hugetlbfs
    > pages, the vmstat compact_free_scanned count decreased by at least 15%.
    >
    > Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
    > Cc: Minchan Kim <minchan@kernel.org>
    > Cc: Mel Gorman <mgorman@suse.de>
    > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
    > Cc: Michal Nazarewicz <mina86@mina86.com>

    Acked-by: Michal Nazarewicz <mina86@mina86.com>

    > Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
    > Cc: Christoph Lameter <cl@linux.com>
    > Cc: Rik van Riel <riel@redhat.com>
    > Cc: David Rientjes <rientjes@google.com>
    > ---
    > mm/compaction.c | 25 +++++++++++++++++++++++++
    > 1 file changed, 25 insertions(+)
    >
    > diff --git a/mm/compaction.c b/mm/compaction.c
    > index e37d361..4a14084 100644
    > --- a/mm/compaction.c
    > +++ b/mm/compaction.c
    > @@ -437,6 +437,24 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
    >
    > if (!valid_page)
    > valid_page = page;
    > +
    > + /*
    > + * For compound pages such as THP and hugetlbfs, we can save
    > + * potentially a lot of iterations if we skip them at once.
    > + * The check is racy, but we can consider only valid values
    > + * and the only danger is skipping too much.
    > + */
    > + if (PageCompound(page)) {
    > + unsigned int comp_order = compound_order(page);
    > +
    > + if (comp_order > 0 && comp_order < MAX_ORDER) {

    + if (comp_order < MAX_ORDER) {

    Might produce shorter/faster code. Dunno. Maybe. So many
    micro-optimisations. Applies to the previous patch as well.
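
    For reference, a minimal sketch (not the applied patch) of how the hunk
    would read with the lower-bound check dropped: when a racy
    compound_order() read returns 0, the skip is already a no-op because
    (1UL << 0) - 1 == 0, so only the MAX_ORDER upper bound does any work.

        if (PageCompound(page)) {
                unsigned int comp_order = compound_order(page);

                /* A bogus racy read >= MAX_ORDER must still be rejected. */
                if (comp_order < MAX_ORDER) {
                        blockpfn += (1UL << comp_order) - 1;
                        cursor += (1UL << comp_order) - 1;
                }

                goto isolate_fail;
        }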

    > + blockpfn += (1UL << comp_order) - 1;
    > + cursor += (1UL << comp_order) - 1;
    > + }
    > +
    > + goto isolate_fail;
    > + }
    > +
    > if (!PageBuddy(page))
    > goto isolate_fail;
    >
    > @@ -496,6 +514,13 @@ isolate_fail:
    >
    > }
    >
    > + /*
    > + * There is a tiny chance that we have read bogus compound_order(),
    > + * so be careful to not go outside of the pageblock.
    > + */
    > + if (unlikely(blockpfn > end_pfn))
    > + blockpfn = end_pfn;
    > +
    > trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
    > nr_scanned, total_isolated);
    >
    > --
    > 2.1.4
    >

    --
    Best regards,                                         _     _
    .o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
    ..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
    ooo +--<mpn@google.com>--<xmpp:mina86@jabber.org>--ooO--(_)--Ooo--

