Subject: Re: a quest for a better scheduler
On Tue, Apr 03, 2001 at 10:55:12AM +0200, Ingo Molnar wrote:
>
> you can easily make the scheduler fast in the many-processes case by
> sacrificing functionality in the much more realistic, few-processes case.
> None of the patches I've seen so far maintained the current scheduler's
> few-processes logic. But i invite you to improve the current scheduler's
> many-processes behavior, without hurting its behavior in the few-processes
> case.
>

Maintaining the current scheduler's logic is exactly what we are trying
to do in the projects at:

http://lse.sourceforge.net/scheduling/

A common design goal for the two alternative scheduler
implementations at this site is to preserve the current scheduler's
behavior and scheduling decisions. In the case of the priority queue
scheduler, we have actually run a copy of the existing scheduler in
parallel (in the same kernel) to verify that we are making the same
scheduling decisions. Currently, this implementation deviates from
the current scheduler only in a small number of cases where tasks get
a boost for sharing the same memory map.
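
(For reference, the boost in question is roughly the small bonus the
stock goodness() calculation gives a candidate that shares the running
task's memory map, since picking it avoids an address-space switch.
The fragment below is only a rough user-space sketch of that
weighting, not the kernel code; the struct, constants and values are
illustrative stand-ins.)

/*
 * Rough user-space illustration of the kind of weighting goodness()
 * applies; the struct, constants and values are stand-ins, not the
 * kernel's definitions.
 */
#include <stdio.h>

struct fake_task {
	int counter;	/* remaining timeslice */
	int nice;	/* -20..19 */
	int processor;	/* CPU the task last ran on */
	void *mm;	/* address-space identity (NULL for kernel threads) */
};

#define PROC_CHANGE_PENALTY 15	/* illustrative; rewards CPU affinity */

static int goodness(const struct fake_task *p, int this_cpu, void *this_mm)
{
	int weight = p->counter;	/* out of timeslice -> weight 0 */

	if (!weight)
		return 0;

	if (p->processor == this_cpu)	/* prefer the CPU it last ran on */
		weight += PROC_CHANGE_PENALTY;

	/*
	 * The boost referred to above: a task sharing the current memory
	 * map (or a kernel thread with no mm) is slightly preferred,
	 * since picking it avoids an address-space switch.
	 */
	if (p->mm == this_mm || !p->mm)
		weight += 1;

	return weight + 20 - p->nice;
}

int main(void)
{
	int shared_mm, other_mm;
	struct fake_task a = { .counter = 6, .processor = 0, .mm = &shared_mm };
	struct fake_task b = { .counter = 6, .processor = 0, .mm = &other_mm };

	printf("same mm: %d, different mm: %d\n",
	       goodness(&a, 0, &shared_mm), goodness(&b, 0, &shared_mm));
	return 0;
}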

The multi-queue implementation is more interesting. It, too, is
designed to maintain the behavior of the current scheduler. However,
as the runqueues get longer (and we start seeing contention on the
runqueue locks), it begins to deviate from the existing scheduler's
behavior and make more local scheduling decisions. Ideally, this
implementation will match the behavior of the current scheduler at
low thread counts and make increasingly local decisions as pressure
on the scheduler grows.
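
To make "more local scheduling decisions" concrete, here is a minimal
user-space sketch (pthreads). It is not the actual multi-queue patch,
and the names and numbers are made up, but it shows the intended
shape: remote queues are only consulted with a trylock, so whenever a
remote lock is busy that queue is skipped and the decision collapses
toward the local queue exactly when contention rises.

/*
 * Minimal sketch of falling back to local decisions under lock
 * contention; not the multi-queue patch itself.  Build with -pthread.
 */
#include <pthread.h>
#include <stdio.h>

#define NR_CPUS 4

struct runqueue {
	pthread_mutex_t lock;
	int best_goodness;	/* stand-in for "best runnable task here" */
};

static struct runqueue rq[NR_CPUS];

/* Decide which queue CPU `this_cpu` should pull its next task from. */
static int pick_queue(int this_cpu)
{
	int best_cpu = this_cpu;
	int best;

	/* The local queue is always examined, under its own lock. */
	pthread_mutex_lock(&rq[this_cpu].lock);
	best = rq[this_cpu].best_goodness;
	pthread_mutex_unlock(&rq[this_cpu].lock);

	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		if (cpu == this_cpu)
			continue;
		/*
		 * Remote queues are only peeked at with a trylock.  If the
		 * lock is busy the queue is skipped, so the busier the
		 * locks, the more local the decision becomes.
		 */
		if (pthread_mutex_trylock(&rq[cpu].lock) != 0)
			continue;
		if (rq[cpu].best_goodness > best) {
			best = rq[cpu].best_goodness;
			best_cpu = cpu;
		}
		pthread_mutex_unlock(&rq[cpu].lock);
	}
	return best_cpu;
}

int main(void)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		pthread_mutex_init(&rq[cpu].lock, NULL);
		rq[cpu].best_goodness = cpu;	/* fake data for the demo */
	}
	printf("cpu 0 would pull from queue %d\n", pick_queue(0));
	return 0;
}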

Neither of these implementations is yet at a point where I would
advocate its adoption.

Can someone tell me what a good workload/benchmark would be for
examining 'low thread count' performance? In the past people have
used the 'spinning on sched_yield' benchmark, but that now makes
little sense given the sched_yield optimizations introduced in 2.4.
In addition, such a benchmark mostly ignores the 'reschedule_idle'
component of the scheduler. We have developed a 'token passing'
benchmark that attempts to address these issues (called reflex at the
above site). However, I would really like a pointer to a
workload/benchmark the community accepts for these low thread count
cases.
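
For anyone curious, the sketch below is not the reflex code from the
site, just the bare token-passing pattern: two processes bounce a byte
over a pair of pipes, so every iteration blocks one task and wakes the
other, exercising the wakeup (reschedule_idle) path rather than
sched_yield(). Time it externally (e.g. with time(1)) and vary the
number of process pairs to stay in the low-thread-count regime.

/*
 * Bare token-passing pattern, not the reflex benchmark itself: parent
 * and child bounce one byte over two pipes, so each pass blocks one
 * task and wakes the other.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

#define PASSES 100000

int main(void)
{
	int ping[2], pong[2];
	char token = 't';
	pid_t pid;

	if (pipe(ping) < 0 || pipe(pong) < 0) {
		perror("pipe");
		return 1;
	}

	pid = fork();
	if (pid < 0) {
		perror("fork");
		return 1;
	}

	if (pid == 0) {
		/* Child: wait for the token, send it back. */
		for (int i = 0; i < PASSES; i++) {
			if (read(ping[0], &token, 1) != 1)
				exit(1);
			if (write(pong[1], &token, 1) != 1)
				exit(1);
		}
		exit(0);
	}

	/* Parent: send the token, wait for it to come back. */
	for (int i = 0; i < PASSES; i++) {
		if (write(ping[1], &token, 1) != 1)
			return 1;
		if (read(pong[0], &token, 1) != 1)
			return 1;
	}
	waitpid(pid, NULL, 0);
	printf("%d token passes complete\n", PASSES);
	return 0;
}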

--
Mike Kravetz mkravetz@sequent.com
IBM Linux Technology Center
