From: Vincent Guittot
Date: Thu, 13 Feb 2020
Subject: Re: [RFC 2/4] sched/numa: replace runnable_load_avg by load_avg
On Thu, 13 Feb 2020 at 18:02, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Thu, Feb 13, 2020 at 05:38:31PM +0100, Vincent Guittot wrote:
> > > > Your test doesn't explicitly ensure that the 1 condition is met
> > > >
> > > > That being said, I'm not sure it's really a wrong thing? I mean
> > > > load_balance will probably try to pull back some tasks on src, but as
> > > > long as it is not a task with the dst node as preferred node, it should
> > > > not be that harmful
> > >
> > > My thinking was that if the source has as many or more running tasks than
> > > the destination *after* the move, then it's not harmful and does not add
> > > work for the load balancer.
> >
> > The load balancer will see an imbalance, but fbq_classify_group/queue
> > should be there to prevent it from pulling back tasks that are on their
> > preferred node and to only pull other tasks.
> >
>
> Yes, exactly. Between fbq_classify and migrate_degrades_locality, I'm
> expecting that the load balancer will only override NUMA balancing when
> there is no better option. With the imbalance check, I want to avoid
> the situation where NUMA balancing moves a task for locality, the load
> balancer pulls it back for balance, NUMA balancing retries the move and
> so on, because that's stupid. The locality matters, but being continually
> dequeued/enqueued is unhelpful.
>
> While there might be grounds for relaxing the degree to which an imbalance
> is allowed across SD domains, I am avoiding looking in that direction again
> until the load balancer and NUMA balancer stop overriding each other for
> silly reasons (or the NUMA balancer fighting itself, which can happen).

Makes sense.
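
For reference, the classification being leaned on above boils down to roughly
the sketch below. It is a simplified, self-contained illustration of the
fbq_classify_group/queue idea rather than the actual kernel code; the struct,
the helper name and the counters are stand-ins for what struct rq tracks.

#include <stdio.h>

/*
 * Simplified sketch of the fbq classification idea: a runqueue is
 * "regular" while it still runs tasks that NUMA balancing does not
 * manage, "remote" when every task is NUMA-managed but some run away
 * from their preferred node, and "all" when every task already sits
 * on its preferred node.
 */
enum fbq_type { regular, remote, all };

struct rq_stats {                          /* hypothetical stand-in for struct rq */
        unsigned int nr_running;           /* all runnable tasks                   */
        unsigned int nr_numa_running;      /* tasks managed by NUMA balancing      */
        unsigned int nr_preferred_running; /* tasks already on their preferred node */
};

static enum fbq_type classify(const struct rq_stats *rq)
{
        if (rq->nr_running > rq->nr_numa_running)
                return regular;  /* some tasks are not NUMA-managed             */
        if (rq->nr_running > rq->nr_preferred_running)
                return remote;   /* NUMA tasks running off their preferred node */
        return all;              /* everything is where it wants to be          */
}

int main(void)
{
        static const char * const name[] = { "regular", "remote", "all" };
        struct rq_stats rq = {
                .nr_running = 4,
                .nr_numa_running = 4,
                .nr_preferred_running = 4,
        };

        /* Prints "all": a balancer looking for work should try other queues first. */
        printf("fbq type: %s\n", name[classify(&rq)]);
        return 0;
}

The point, as discussed above, is that queues whose tasks already sit on their
preferred node are the last place the load balancer should look for work.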

>
> --
> Mel Gorman
> SUSE Labs
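
The other mechanism Mel mentions, migrate_degrades_locality(), conceptually
reduces to comparing the task's NUMA fault counts on the source and destination
nodes before the load balancer moves it. The sketch below only illustrates that
idea under simplified assumptions; the helper name, the fixed two-node fault
array and the retry threshold are made up for the example.

#include <stdbool.h>
#include <stdio.h>

#define NR_NODES 2      /* illustration only: a two-node machine */

/* Hypothetical per-task record of NUMA faults observed on each node. */
struct task_numa_stats {
        unsigned long faults[NR_NODES];
};

/*
 * Sketch of the migrate_degrades_locality() idea: before the load
 * balancer moves a task across nodes, check whether the task has seen
 * more NUMA faults on the source node than on the destination. If so,
 * the move hurts locality and should be skipped, unless the balancer
 * has already failed repeatedly and needs to make progress anyway.
 */
static bool move_degrades_locality(const struct task_numa_stats *p,
                                   int src_node, int dst_node,
                                   unsigned int nr_balance_failed)
{
        if (src_node == dst_node)
                return false;   /* not a cross-node move */

        if (nr_balance_failed > 2)
                return false;   /* balancer is struggling: let it move the task */

        return p->faults[dst_node] < p->faults[src_node];
}

int main(void)
{
        struct task_numa_stats p = { .faults = { 120, 30 } };

        /* Moving from node 0 (120 faults) to node 1 (30 faults) degrades locality. */
        printf("degrades locality: %s\n",
               move_degrades_locality(&p, 0, 1, 0) ? "yes" : "no");
        return 0;
}

Between the queue classification and this per-task check, the intent described
above is that the load balancer only undoes a NUMA placement when it has run
out of better options, which is what avoids the dequeue/enqueue ping-pong.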
