    Subject: Re: [PATCH 35/35] autonuma: page_autonuma

    On Fri, 2012-05-25 at 19:02 +0200, Andrea Arcangeli wrote:
    > Move the AutoNUMA per page information from the "struct page" to a
    > separate page_autonuma data structure allocated in the memsection
    > (with sparsemem) or in the pgdat (with flatmem).
    >
    > This is done to avoid growing the size of the "struct page" and the
    > page_autonuma data is only allocated if the kernel has been booted on
    > real NUMA hardware (or if noautonuma is passed as parameter to the
    > kernel).
    >

    Argh, please fold this change back into the series proper -- if you want
    to keep it at all. As it is, it's not really an improvement IMO; see below.

    > +struct page_autonuma {
    > +        /*
    > +         * FIXME: move to the pgdat section along with the memcg and
    > +         * allocate at runtime only in the presence of a numa system.
    > +         */
    > +        /*
    > +         * To modify autonuma_last_nid locklessly, the architecture
    > +         * needs SMP atomic granularity < sizeof(long); not all archs
    > +         * have that, notably some alpha. Archs without that require
    > +         * autonuma_last_nid to be a long.
    > +         */

    Looking at arch/alpha/include/asm/xchg.h, it looks to have that just
    fine, so maybe we simply don't support SMP on those early Alphas that
    had that weirdness.
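    For reference, the property that comment is after is that a plain store
    to one 16-bit field must not be able to clobber its neighbour in the
    same word. A minimal sketch (not from the patch, names made up) of the
    pattern that breaks on an arch without byte/word stores, where a short
    store compiles to a load/mask/store of the whole containing word:

        struct packed_nids {
                short migrate_nid;      /* updated by one CPU */
                short last_nid;         /* updated locklessly by another CPU */
        };

        static struct packed_nids nids;

        void cpu_a(void) { nids.migrate_nid = 3; }
        /* non-atomic RMW of the word can undo cpu_a()'s store on pre-EV56 alpha */
        void cpu_b(void) { nids.last_nid = 5; }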

    > +#if BITS_PER_LONG > 32
    > +        int autonuma_migrate_nid;
    > +        int autonuma_last_nid;
    > +#else
    > +#if MAX_NUMNODES >= 32768
    > +#error "too many nodes"
    > +#endif
    > +        /* FIXME: remember to check the updates are atomic */
    > +        short autonuma_migrate_nid;
    > +        short autonuma_last_nid;
    > +#endif
    > +        struct list_head autonuma_migrate_node;
    > +
    > +        /*
    > +         * To find the page starting from the autonuma_migrate_node we
    > +         * need a backlink.
    > +         */
    > +        struct page *page;
    > +};

    This makes a shadow page frame of 32 bytes per page, or ~0.8% of memory.
    This isn't in fact an improvement.
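    FWIW the arithmetic, assuming 64-bit pointers and 4KiB pages: 4+4 bytes
    of nids, a 16 byte list_head and an 8 byte backlink is 32 bytes per
    page. A throwaway userspace check (struct copied from the patch,
    list_head stubbed):

        #include <stdio.h>

        struct list_head { struct list_head *next, *prev; };   /* stub */
        struct page;                                            /* opaque */

        struct page_autonuma {
                int autonuma_migrate_nid;
                int autonuma_last_nid;
                struct list_head autonuma_migrate_node;
                struct page *page;
        };

        int main(void)
        {
                printf("sizeof(struct page_autonuma) = %zu\n",
                       sizeof(struct page_autonuma));
                printf("overhead = %.2f%% of memory\n",
                       100.0 * sizeof(struct page_autonuma) / 4096);
                return 0;
        }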

    The suggestion made by Rik was to have something like a sqrt(nr_pages)
    (?) scaled array of such entries, containing just the list_head and page
    pointer -- and leave the two nids in the regular page frame. Although I
    think you've got to fight the memcg people over that last word in struct
    page.

    That places a limit on the amount of pages that can be in migration
    concurrently, but also greatly reduces the memory overhead.
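    A rough sketch of that shape (my reading of the suggestion, all names
    made up, untested): a small boot-time pool of migration slots scaled to
    int_sqrt(nr_pages), with the two nids left in struct page, so only
    pages currently queued for migration pay for the list_head and
    backlink:

        #include <linux/kernel.h>       /* int_sqrt() */
        #include <linux/init.h>
        #include <linux/errno.h>
        #include <linux/list.h>
        #include <linux/slab.h>
        #include <linux/mm_types.h>     /* struct page */

        /* One slot per page currently queued for migration. */
        struct autonuma_migrate_slot {
                struct list_head node;  /* linked into a per-node migrate list */
                struct page *page;      /* backlink to the page being migrated */
        };

        static struct autonuma_migrate_slot *migrate_slots;
        static unsigned long nr_migrate_slots;

        static int __init autonuma_migrate_pool_init(unsigned long nr_pages)
        {
                nr_migrate_slots = int_sqrt(nr_pages);
                migrate_slots = kcalloc(nr_migrate_slots,
                                        sizeof(*migrate_slots), GFP_KERNEL);
                return migrate_slots ? 0 : -ENOMEM;
        }

    Whether sqrt is the right scale factor is a separate question; the
    point is that the list_head and backlink then cost O(sqrt(nr_pages))
    instead of O(nr_pages).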

