    Subject: Re: [PATCH 1/2] sched/wait: Break up long wake list walk
    On Tue, Aug 22, 2017 at 2:24 PM, Andi Kleen <ak@linux.intel.com> wrote:
    >
    > I believe in this case it's used by threads, so a reference count limit
    > wouldn't help.

    For the first migration try, yes. But if it's some kind of "try and
    try again" pattern, the second time you try, and there are people
    waiting for the page, the page count (not the map count) would be
    elevated.

    So it's possible that, depending on exactly what the deeper problem
    is, the "this page is very busy, don't migrate" case might be
    discoverable, and the page count might be part of it.

    However, after PeterZ commented that page migration should go through
    that should_numa_migrate_memory() filter, I am looking at the
    mpol_misplaced() code.

    And honestly, that MPOL_PREFERRED / MPOL_F_LOCAL case really looks
    like complete garbage to me.

    It looks like garbage exactly because it says "always migrate to the
    current node", but that's crazy - if it's a group of threads all
    running together on the same VM, that will obviously just bounce the
    page around for absolutely no good reason.

    The *other* memory policies look fairly sane. They basically have a
    fairly well-defined preferred node for the policy (although the
    "MPOL_INTERLEAVE" looks wrong for a hugepage). But
    MPOL_PREFERRED/MPOL_F_LOCAL really looks completely broken.
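
    For reference, this is what that case currently looks like in
    mpol_misplaced() (these are exactly the lines the patch below
    replaces):

    case MPOL_PREFERRED:
    	if (pol->flags & MPOL_F_LOCAL)
    		polnid = numa_node_id();
    	else
    		polnid = pol->v.preferred_node;
    	break;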

    Maybe people expected that anybody who uses MPOL_F_LOCAL will also
    bind all threads to one single node?

    Could we perhaps make that "MPOL_PREFERRED / MPOL_F_LOCAL" case just
    do the MPOL_F_MORON policy, which *does* use that "should I migrate to
    the local node" filter?

    IOW, we've been looking at the waiters (because the problem shows up
    due to the excessive wait queues), but maybe the source of the problem
    comes from the numa balancing code just insanely bouncing pages
    back-and-forth if you use that "always balance to local node" thing.
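
    Something like the sketch below (untested and purely illustrative -
    the libnuma pinning, the two-node assumption and all the names are
    just for the example) is the kind of workload that would hit it:
    every NUMA hint fault says "this page belongs on *my* node", and with
    MPOL_F_LOCAL each thread's answer is different, so the page just
    keeps getting queued for migration back and forth.

    /*
     * bounce.c - one hot anonymous page shared by two threads that run
     * on different nodes.  Build with: gcc bounce.c -lnuma -lpthread
     */
    #include <numa.h>	/* numa_available(), numa_run_on_node(), ... */
    #include <numaif.h>	/* set_mempolicy(), MPOL_PREFERRED */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static volatile long *shared_page;	/* hot in every thread */

    static void *toucher(void *arg)
    {
    	long node = (long)arg;

    	/* Run this thread on the CPUs of "node", so its idea of the
    	 * "local" node differs from the other thread's. */
    	if (numa_run_on_node((int)node) != 0)
    		perror("numa_run_on_node");

    	/* Every NUMA hint fault on this page now answers "migrate it
    	 * to *my* node", and the answers disagree. */
    	for (long i = 0; i < (1L << 30); i++)
    		shared_page[0]++;
    	return NULL;
    }

    int main(void)
    {
    	pthread_t tid[2];

    	if (numa_available() < 0 || numa_num_configured_nodes() < 2) {
    		fprintf(stderr, "need a NUMA box with at least 2 nodes\n");
    		return 1;
    	}

    	/* MPOL_PREFERRED with an empty nodemask is the "local
    	 * allocation" policy, i.e. the MPOL_F_LOCAL case above. */
    	if (set_mempolicy(MPOL_PREFERRED, NULL, 0) != 0)
    		perror("set_mempolicy");

    	shared_page = calloc(1, 4096);

    	for (long n = 0; n < 2; n++)
    		pthread_create(&tid[n], NULL, toucher, (void *)n);
    	for (long n = 0; n < 2; n++)
    		pthread_join(tid[n], NULL);
    	return 0;
    }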

    Untested (as always) patch attached.

    Linus
    mm/mempolicy.c | 7 ++++---
    1 file changed, 4 insertions(+), 3 deletions(-)

    diff --git a/mm/mempolicy.c b/mm/mempolicy.c
    index 618ab125228b..f2d5aab84c49 100644
    --- a/mm/mempolicy.c
    +++ b/mm/mempolicy.c
    @@ -2190,9 +2190,9 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
     
     	case MPOL_PREFERRED:
     		if (pol->flags & MPOL_F_LOCAL)
    -			polnid = numa_node_id();
    -		else
    -			polnid = pol->v.preferred_node;
    +			goto local_node;
    +
    +		polnid = pol->v.preferred_node;
     		break;
     
     	case MPOL_BIND:
    @@ -2218,6 +2218,7 @@ int mpol_misplaced(struct page *page, struct vm_area_struct *vma, unsigned long
     
     	/* Migrate the page towards the node whose CPU is referencing it */
     	if (pol->flags & MPOL_F_MORON) {
    +local_node:
     		polnid = thisnid;
     
     		if (!should_numa_migrate_memory(current, page, curnid, thiscpu))