From: Bill Davidsen <davidsen@tmr.com>
Date: 24 Apr 2003
Subject: Re: [patch] HT scheduler, sched-2.5.68-B2
On Wed, 23 Apr 2003, Andrew Theurer wrote:

> Well, on high load you shouldn't have an idle cpu anyway, so you would never
> pass the requirements for the aggressive -idle- steal even if it was turned
> on. On low loads on HT, without this aggressive balance on cpu bound tasks,
> you will always load up one core before using any of the others. When you
> fork/exec, the child will start on the same runqueue as the parent, the idle
> sibling will start running it, and it will never get a chance to balance
> properly while it's in a run state. This is the same behavior I saw with the
> NUMA-HT solution, because I didn't have this aggressive balance (although it
> could be added, I suppose), and as a result it consistently performed worse
> than Ingo's solution (but still better than no patch at all).

Sorry if I misunderstand, but if HT is present, I would think that you
would want to start the child of a fork on the same runqueue, because the
cache is loaded, and to run the child first, because in many cases the
child will do an exec. At that point the exec'd process, run on another
CPU, would likely leave the cache still useful to the parent.

I fully admit that this is "seems to me" rather than measured, but
protecting the cache is certainly a good thing in general.
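
Just to make the pattern I'm relying on concrete, here is a toy userspace
illustration (not scheduler code, and nothing here is from Ingo's patch): the
child typically exec()s almost immediately, so letting it run first on the
parent's cache-warm cpu costs little, and once it execs its old cache
footprint no longer matters anyway.

#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/*
 * Toy fork/exec illustration -- not kernel code.  The child replaces
 * its image right away, so its own cache footprint barely matters;
 * the parent's warm working set is what we care about preserving.
 */
int main(void)
{
	pid_t pid = fork();

	if (pid == 0) {
		/* child: the common case, exec almost immediately */
		execlp("true", "true", (char *)NULL);
		_exit(127);	/* only reached if exec fails */
	}

	/* parent: keeps running with its working set still hot */
	waitpid(pid, NULL, 0);
	return 0;
}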


On Wed, 23 Apr 2003, Martin J. Bligh wrote:

> Actually, what must be happening here is that we're aggressively stealing
> things on non-HT machines ... we should be able to easily prevent that.
>
> Suppose we have 2 real cpus + HT ... A,B,C,D are the cpus, where A, B
> are HT twins, and C,D are HT twins. A&B share runqueue X, and C&D share
> runqueue Y.
>
> What I presume you're trying to do is this: when A and B are running one
> task each, and C and D are not running anything, balance out so we have
> one on A and one on C. If we define some "nr_active(rq)" concept to be
> the number of tasks actually actively running on that runqueue's cpus,
> then we're switching from nr_actives of 2/0 to 1/1.
>
> However, we don't want to switch from 2/1 to 1/2 ... that's pointless.
> Or 0/1 to 1/0 (which I think is what's happening). But in the case
> where we had (theoretically) 4 HT siblings per real cpu, we would want
> to migrate 1/3 to 2/2.
>
> The key is that we want to aggressively steal when
> nr_active(remote) - nr_active(idle) > 1 ... not > 0.
> This will implicitly *never* happen on non-HT machines, so it seems
> like a nice algorithm ... ?

Is it really that simple?
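
For what it's worth, here is a minimal standalone sketch of that condition.
The struct, the nr_active field, and should_steal() are just a toy model I
made up to check the arithmetic, not the actual sched-2.5.68-B2 data
structures or code.

#include <stdio.h>

/*
 * Toy model: one runqueue per HT sibling pair; nr_active counts the
 * tasks currently running on that pair's cpus.  Illustrative only.
 */
struct runqueue {
	int nr_active;
};

/*
 * Steal aggressively only when the remote pair is running at least two
 * more tasks than the local one: 2/0 qualifies, 2/1 and 0/1 do not.
 * On a non-HT box nr_active can never exceed 1 per runqueue, so the
 * difference never exceeds 1 and the test never fires.
 */
static int should_steal(const struct runqueue *local,
			const struct runqueue *remote)
{
	return remote->nr_active - local->nr_active > 1;
}

int main(void)
{
	struct runqueue x = { 2 };	/* A+B both busy */
	struct runqueue y = { 0 };	/* C+D both idle */

	printf("2/0: steal=%d\n", should_steal(&y, &x));	/* prints 1 */
	y.nr_active = 1;
	printf("2/1: steal=%d\n", should_steal(&y, &x));	/* prints 0 */
	return 0;
}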

--
bill davidsen <davidsen@tmr.com>
CTO, TMR Associates, Inc
Doing interesting things with little computers since 1979.

