 
Subject: Re: [patch] O(1) scheduler, -G1, 2.5.2-pre10, 2.4.17 (fwd)

On Thu, 10 Jan 2002, Mike Kravetz wrote:

> If I run 3 cpu-hog tasks on a 2 CPU system, then 1 task will get an
> entire CPU while the other 2 tasks share the other CPU (easily
> verified by a simple test program). On previous versions of the
> scheduler 'balancing' this load was achieved by the global nature of
> time slices. No task was given a new time slice until the time slices
> of all runnable tasks had expired. In the current scheduler, the
> decision to replenish time slices is made at a local (per-CPU) level.
> I assume the load balancing code should take care of the above
> workload? Or is this the behavior we desire? [...]
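
(For reference, a minimal sketch of the kind of test program Mike
mentions - not his actual code, and the NHOGS/SECONDS values are
arbitrary: fork three CPU hogs, let them spin for a fixed wall-clock
interval, and compare how much work each got done. On a 2-CPU box
showing the behavior above, one child reports roughly twice the
iterations of the other two.)

	#include <stdio.h>
	#include <stdlib.h>
	#include <unistd.h>
	#include <time.h>
	#include <sys/wait.h>

	#define NHOGS   3
	#define SECONDS 10

	int main(void)
	{
		int i;

		for (i = 0; i < NHOGS; i++) {
			if (fork() == 0) {
				unsigned long iters = 0;
				time_t end = time(NULL) + SECONDS;

				/* burn CPU until the interval is up */
				while (time(NULL) < end)
					iters++;
				printf("hog %d: %lu iterations\n", i, iters);
				exit(0);
			}
		}
		for (i = 0; i < NHOGS; i++)
			wait(NULL);
		return 0;
	}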

Arguably this is the most extreme situation - every other distribution
(2:3, 3:4) is much less problematic. Will this cause problems? We could
make the fairness-balancer more 'sharp' so that it oscillates the
lengths of the two runqueues at a slow pace, but every such oscillation
still costs us cache state.
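
(Purely as an illustration of the arithmetic, not of any existing
balancer code: if the balancer rotated which task owns the dedicated
CPU once per balancing period, every task would converge to its fair
2/3-of-a-CPU share - at the price of one cache-destroying migration
per period. A toy simulation, period count picked arbitrarily:)

	#include <stdio.h>

	int main(void)
	{
		double share[3] = { 0.0, 0.0, 0.0 };
		int period, t, periods = 300;

		for (period = 0; period < periods; period++) {
			/* the task holding a CPU to itself rotates */
			int solo = period % 3;

			for (t = 0; t < 3; t++)
				share[t] += (t == solo) ? 1.0 : 0.5;
		}
		for (t = 0; t < 3; t++)
			printf("task %d: %.3f CPUs on average\n",
			       t, share[t] / periods);
		return 0;
	}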

> We certainly have optimal cache use.

Indeed. The question is: should we migrate processes around just to get
100% fairness in 'top' output? The (implicit) cost of a task migration
(caused by the destruction & rebuilding of cache state) can easily be
10 milliseconds on a system with big caches.
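
(One rough way to see that cost from userspace, assuming a kernel with
sched_setaffinity(2) - an interface that postdates this thread: warm a
buffer on CPU 0, time a pass over it, hop to CPU 1 and time the first
cold pass there. The delta approximates the cache-rebuild penalty; the
4MB buffer size is an arbitrary stand-in for a "big cache".)

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <stdlib.h>
	#include <time.h>

	#define BUF (4 * 1024 * 1024)	/* assumed cache size */

	static void pin(int cpu)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(cpu, &set);
		sched_setaffinity(0, sizeof(set), &set);
	}

	static double pass_ms(volatile char *buf)
	{
		struct timespec a, b;
		long i;

		clock_gettime(CLOCK_MONOTONIC, &a);
		for (i = 0; i < BUF; i += 64)	/* touch each cache line */
			buf[i]++;
		clock_gettime(CLOCK_MONOTONIC, &b);
		return (b.tv_sec - a.tv_sec) * 1e3 +
		       (b.tv_nsec - a.tv_nsec) / 1e6;
	}

	int main(void)
	{
		volatile char *buf = malloc(BUF);

		pin(0);
		pass_ms(buf);			/* warm CPU 0's cache */
		printf("warm pass: %.3f ms\n", pass_ms(buf));
		pin(1);				/* migrate to CPU 1 */
		printf("cold pass: %.3f ms\n", pass_ms(buf));
		return 0;
	}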

Ingo
