Subject: QNX-style scheduler v1.02
I decided that I had been a little over-zealous in throwing out all
remains of the original scheduler. I think this version works a little
better. Please let me know how well it works for you, and if you have SMP
let me know *if* it works for you... :)

-- Adam
QNX-style Scheduling v1.02 for Linux 2.0

by

Adam McKee


INTRODUCTION

This patch provides QNX-style scheduling for Linux 2.0. The intent is to
provide more flexible and powerful scheduling, and to provide improved
interactive performance under heavy CPU load. This scheduler does not provide
increased throughput, however; in fact, there is a very small price to pay in
terms of throughput in order to achieve the aforementioned goals. So that you
can appreciate exactly what the patch does, I will give an (oversimplified)
explanation of how scheduling is normally done under Linux 2.0, followed by a
description of the QNX scheduler (as implemented). If you use this patch, you
will need to re-think how you re-nice tasks, so a brief discussion of
re-nicing tasks is given.


NORMAL SCHEDULING UNDER LINUX 2.0

There is a single run-queue. Each task may be assigned a priority or
"niceness" which determines how long a timeslice it gets. For example, if
you give a task a niceness of -20, the kernel will not use the -20 directly,
but will instead use this number to determine how long/often the task should
be allowed to run. The "counter" attribute of each task determines how much
time it has left in its timeslice. After every task on the run-queue has used
up its timeslice (counter = 0), the counter of each task is reset from its
priority (a short sketch of this refill appears after the lists below). This
scheduler has some nice properties:

o It obeys the KISS principle (Keep It Simple, Stupid!). There is
always a danger that trying to get "too clever" will introduce
unexpected problems.

o It guarantees that no task will starve.

o It allows the user to have some control over the scheduling.

Some drawbacks:

o Interactive performance under heavy CPU load is not good.

o Limited control over scheduling -- for example, it is not possible
to tell the scheduler to *never* run a particular task unless
*nothing* else wants to run ("idle-eating task"). Those of you who
have been participating in the RC5 contest can appreciate the need
for something like this :-)
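
To make the refill concrete, here is a rough, self-contained sketch of the
stock selection/refill logic described above. It is simplified from the code
this patch removes from kernel/sched.c; the struct and function names here
are illustrative, not the kernel's:

#include <stddef.h>

/*
 * Simplified illustration of stock Linux 2.0 scheduling (not the patched
 * code).  Pick the runnable task with the most timeslice left; once every
 * runnable task has exhausted its slice, refill the counters from each
 * task's priority, as the removed code in kernel/sched.c does.
 */
struct old_task {
        int counter;                    /* timeslice remaining */
        int priority;                   /* set by nice/renice */
        struct old_task *next_run;      /* next task on the single run-queue */
};

static struct old_task *pick_next(struct old_task *head)
{
        struct old_task *p, *next = NULL;
        int best = -1000;

        for (p = head; p; p = p->next_run)
                if (p->counter > best) {
                        best = p->counter;
                        next = p;
                }

        if (best == 0)                  /* all slices used up: refill */
                for (p = head; p; p = p->next_run)
                        p->counter = (p->counter >> 1) + p->priority;

        return next;
}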


THE QNX SCHEDULER

My understanding of the QNX scheduler is based entirely on a short blurb I
read about it, so I would not be surprised to find that the following
discussion contains errors and/or omissions. However, please do read
on... :-)

There are 32 separate run-queues, numbered 0-31. When the scheduler is
looking for a task to run, it will select a task from the lowest-numbered
run-queue that has a runnable task on it. This means that, for example, a
task on run-queue 1 will *not* run until there are no tasks on run-queue 0
that want to run. The init task has a minimum run-queue of 15. Newly
created tasks inherit their minimum run-queue from their parent.
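
As a minimal sketch (using the patch's run_q[] array of circular lists, with
an illustrative struct and function name), the selection rule is simply:

#include <stddef.h>

#define NR_RUN_QUEUES 32

struct qtask { struct qtask *next_run; /* ... */ };
static struct qtask *run_q[NR_RUN_QUEUES];  /* head of each circular run-queue */

/* Return the head of the lowest-numbered non-empty run-queue, or NULL
 * (i.e. run the idle task) when nothing is runnable. */
static struct qtask *pick_lowest_queue(void)
{
        int q;

        for (q = 0; q < NR_RUN_QUEUES; q++)
                if (run_q[q])
                        return run_q[q];
        return NULL;
}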

Three scheduling policies are supported:

--- FIFO

The selected task will run until:

o it blocks
-OR-
o a task on a lower-numbered run-queue wants to run

--- Round-Robin

The selected task will run until:

o it blocks
-OR-
o a task on a lower-numbered run-queue wants to run
-OR-
o 200ms have passed

--- Adaptive

This is the default policy, and the most interesting one. Adaptive
scheduling is like Round-Robin, but it also tries to "intelligently" move
tasks between run-queues in order to provide good interactive response.
Here are the rules for adaptive scheduling:

o Each task has a 'minimum run-queue' attribute that tells the
scheduler the lowest-numbered run-queue the task can be on.
"Normal" tasks have minimum run-queue = 15.

o When a task is initially started, it is placed on its minimum
run-queue.

o If a task blocks (does not use all of its timeslice), it will
be placed on its minimum run-queue when it becomes runnable again.

o If a task uses up all of its timeslice, and there is at least one
other task on the same run-queue that wants to run, its run-queue
will be incremented ("demotion").

o If a task has been starving for one second, and its current run-queue
is greater than its minimum run-queue, its run-queue will be
decremented ("promotion").

The result of applying these rules is that tasks with heavy CPU requirements
will tend to migrate to higher-numbered run-queues, whereas tasks with light
CPU requirements will tend to stay on lower-numbered run-queues. This is
*good* for interactive performance!
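
Here is a hedged, standalone sketch of the promotion/demotion rules. The
field names (run_q, run_q_min, last_run, counter) mirror the ones the patch
adds to task_struct, but the struct, the has_peer flag, and switch_queue()
are simplified stand-ins for the real circular-list manipulation done by
switch_runqueues():

#define NR_RUN_QUEUES 32
#define HZ            100     /* assumed clock-tick rate */

struct ad_task {
        unsigned int  run_q;        /* current run-queue */
        unsigned int  run_q_min;    /* lowest queue this task may occupy */
        unsigned long last_run;     /* jiffies value when it last ran */
        int           counter;      /* timeslice remaining */
        int           has_peer;     /* another runnable task on the same queue? */
};

static void switch_queue(struct ad_task *p, unsigned int q)
{
        p->run_q = q;               /* the patch also re-links the run-queue lists */
}

static void adapt(struct ad_task *p, unsigned long jiffies)
{
        /* demotion: slice exhausted while a peer on the same queue wanted to run */
        if (!p->counter && p->has_peer && p->run_q < NR_RUN_QUEUES - 1)
                switch_queue(p, p->run_q + 1);

        /* promotion: starving for a second or more above its minimum queue */
        else if (jiffies - p->last_run >= HZ && p->run_q > p->run_q_min)
                switch_queue(p, p->run_q - 1);
}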


RE-NICING OF TASKS WITH QNX SCHEDULING

When you re-nice a task, you are actually changing its minimum run-queue. For
example, if you give your X-server a niceness of -20, you are actually setting
its minimum run-queue to 0. If you give it a niceness of 20, you are setting
its minimum run-queue to 31. The priority of a task is its *current*
run-queue (which may be larger than its minimum run-queue in the case of an
adaptively scheduled task).
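
The exact mapping is the one the patch installs in sys_setpriority(); pulled
out here as a standalone helper (the name nice_to_run_q is mine), so the
endpoints are easy to check: -20 -> queue 0, 0 -> queue 15, 20 -> queue 31.

#define NR_RUN_QUEUES 32
#define DEF_RUN_QUEUE 15

static unsigned int nice_to_run_q(int niceval)
{
        unsigned int run_q;

        if (niceval < -20) niceval = -20;
        else if (niceval > 20) niceval = 20;

        if (niceval < 0) {
                run_q  = -niceval - 1;
                run_q *= DEF_RUN_QUEUE - 1;
                run_q /= 19;
                run_q  = DEF_RUN_QUEUE - 1 - run_q;     /* -20 -> 0  */
        } else if (!niceval) {
                run_q  = DEF_RUN_QUEUE;                  /*   0 -> 15 */
        } else {
                run_q  = niceval - 1;
                run_q *= NR_RUN_QUEUES - DEF_RUN_QUEUE - 2;
                run_q /= 19;
                run_q  = DEF_RUN_QUEUE + 1 + run_q;      /*  20 -> 31 */
        }
        return run_q;
}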

You must take care when re-nicing tasks. Unlike the normal Linux scheduler,
the QNX scheduler does not guarantee tasks will not starve. When you re-nice
a task, you may create one of the following problems:

o The task hogs the CPU, and starves out other tasks (in the case of
a negative niceness).

o The task is starved out by higher-priority tasks (in the case of a
positive niceness).

Here are a few general tips for re-nicing tasks:

o On a machine whose primary function is web-serving or news-serving,
you may want to give the httpd or innd task a negative niceness.
Other tasks would then only be allowed to consume "left-over" CPU
time.

o It's probably a good idea to give your X-server a negative niceness.
Interactive performance will likely benefit from this.

o When starting a CPU-intensive job that may take a while to complete,
you may want to give it a positive niceness to ensure the absolute
minimum impact on interactive performance. When compiling a kernel,
you might do 'nice make zlilo' -- users of the system would probably
not even notice any slowdown!

o It's almost never a good idea to give a totally CPU-intensive task
a negative niceness. Doing this with the normal Linux scheduler
can result in a sluggish system -- with this scheduler it can result
in an *unusable* system.

In general, don't re-nice a task unless you understand how the scheduler
works, and you can really convince yourself that it's a good idea.
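
If you prefer to re-nice from a program rather than with nice/renice, the
ordinary setpriority() call still works; the patch simply reinterprets the
value as a minimum run-queue. A minimal example (the PID 123 is only a
placeholder for the process you want to re-nice):

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

int main(void)
{
        /* give process 123 a niceness of -10, i.e. a minimum run-queue
         * below the default of 15; negative values still require root */
        if (setpriority(PRIO_PROCESS, 123, -10) < 0) {
                perror("setpriority");
                return 1;
        }
        return 0;
}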


CONCLUSION

Please let me know how well this patch works for you. I am particularly
interested to hear from SMP users, as I do not have SMP myself and the patch
is currently *untested* with SMP. Also, if you find my implementation of QNX
scheduling is incorrect or incomplete, I would be very glad to hear from you.

Happy task-switching :-)

-- Adam McKee <amckee@poboxes.com>
--- linux/fs/proc/array.c.orig Mon Jul 28 13:13:50 1997
+++ linux/fs/proc/array.c Mon Jul 28 00:44:09 1997
@@ -682,12 +682,34 @@
else
tty_pgrp = -1;

- /* scale priority and nice values from timeslices to -20..20 */
+ /* scale priority and nice values from run-queue # to -20..20 */
/* to make it look like a "normal" unix priority/nice value */
- priority = tsk->counter;
- priority = 20 - (priority * 10 + DEF_PRIORITY / 2) / DEF_PRIORITY;
- nice = tsk->priority;
- nice = 20 - (nice * 20 + DEF_PRIORITY / 2) / DEF_PRIORITY;
+ priority = tsk->run_q;
+ if (priority < DEF_RUN_QUEUE) {
+ priority *= 19;
+ priority /= (DEF_RUN_QUEUE - 1);
+ priority = -20 + priority;
+ } else if (priority == DEF_RUN_QUEUE) {
+ priority = 0;
+ } else {
+ priority -= (DEF_RUN_QUEUE + 1);
+ priority *= 19;
+ priority /= (NR_RUN_QUEUES - DEF_RUN_QUEUE - 2);
+ priority += 1;
+ }
+ nice = tsk->run_q_min;
+ if (nice < DEF_RUN_QUEUE) {
+ nice *= 19;
+ nice /= (DEF_RUN_QUEUE - 1);
+ nice = -20 + nice;
+ } else if (nice == DEF_RUN_QUEUE) {
+ nice = 0;
+ } else {
+ nice -= (DEF_RUN_QUEUE + 1);
+ nice *= 19;
+ nice /= (NR_RUN_QUEUES - DEF_RUN_QUEUE - 2);
+ nice += 1;
+ }

return sprintf(buffer,"%d (%s) %c %d %d %d %d %d %lu %lu \
%lu %lu %lu %lu %lu %ld %ld %ld %ld %ld %ld %lu %lu %ld %lu %lu %lu %lu %lu \
--- linux/kernel/sys.c.orig Mon Jul 28 13:13:16 1997
+++ linux/kernel/sys.c Mon Jul 28 00:44:09 1997
@@ -67,23 +67,25 @@
{
struct task_struct *p;
int error = ESRCH;
- unsigned int priority;
+ unsigned int run_q;

if (which > 2 || which < 0)
return -EINVAL;

- /* normalize: avoid signed division (rounding problems) */
- priority = niceval;
- if (niceval < 0)
- priority = -niceval;
- if (priority > 20)
- priority = 20;
- priority = (priority * DEF_PRIORITY + 10) / 20 + DEF_PRIORITY;
-
- if (niceval >= 0) {
- priority = 2*DEF_PRIORITY - priority;
- if (!priority)
- priority = 1;
+ if (niceval < -20) niceval = -20;
+ else if (niceval > 20) niceval = 20;
+ if (niceval < 0) {
+ run_q = -niceval - 1;
+ run_q *= (DEF_RUN_QUEUE - 1);
+ run_q /= 19;
+ run_q = DEF_RUN_QUEUE - 1 - run_q;
+ } else if (!niceval) {
+ run_q = DEF_RUN_QUEUE;
+ } else {
+ run_q = niceval - 1;
+ run_q *= (NR_RUN_QUEUES - DEF_RUN_QUEUE - 2);
+ run_q /= 19;
+ run_q = DEF_RUN_QUEUE + 1 + run_q;
}

for_each_task(p) {
@@ -96,10 +98,10 @@
}
if (error == ESRCH)
error = 0;
- if (priority > p->priority && !suser())
- error = EACCES;
+ if (run_q < p->run_q_min && !suser())
+ error = EACCES;
else
- p->priority = priority;
+ p->run_q_sw = run_q;
}
return -error;
}
@@ -112,7 +114,7 @@
asmlinkage int sys_getpriority(int which, int who)
{
struct task_struct *p;
- long max_prio = -ESRCH;
+ unsigned int run_q_min = NR_RUN_QUEUES;

if (which > 2 || which < 0)
return -EINVAL;
@@ -120,14 +122,27 @@
for_each_task (p) {
if (!proc_sel(p, which, who))
continue;
- if (p->priority > max_prio)
- max_prio = p->priority;
+ if (p->run_q_min < run_q_min)
+ run_q_min = p->run_q_min;
}

- /* scale the priority from timeslice to 0..40 */
- if (max_prio > 0)
- max_prio = (max_prio * 20 + DEF_PRIORITY/2) / DEF_PRIORITY;
- return max_prio;
+ if (run_q_min == NR_RUN_QUEUES)
+ return -ESRCH;
+
+ /* scale the run_q to 0..40 */
+ if (run_q_min < DEF_RUN_QUEUE) {
+ run_q_min *= 19;
+ run_q_min /= (DEF_RUN_QUEUE - 1);
+ } else if (run_q_min == DEF_RUN_QUEUE) {
+ run_q_min = 20;
+ } else {
+ run_q_min -= (DEF_RUN_QUEUE + 1);
+ run_q_min *= 19;
+ run_q_min /= (NR_RUN_QUEUES - DEF_RUN_QUEUE - 2);
+ run_q_min += 21;
+ }
+ run_q_min = 40 - run_q_min;
+ return run_q_min;
}

#ifndef __alpha__
--- linux/kernel/sched.c.orig Mon Jul 28 13:13:28 1997
+++ linux/kernel/sched.c Mon Jul 28 18:02:39 1997
@@ -7,6 +7,7 @@
* 1996-12-23 Modified by Dave Grothe to fix bugs in semaphores and
* make semaphores SMP safe
* 1997-01-28 Modified by Finn Arne Gangstad to make timers scale better.
+ * 1997-07-30 Modified by Adam McKee to use multilevel feedback scheduling.
*/

/*
@@ -92,6 +93,7 @@
static struct fs_struct init_fs = INIT_FS;
static struct files_struct init_files = INIT_FILES;
static struct signal_struct init_signals = INIT_SIGNALS;
+static struct task_struct * run_q[NR_RUN_QUEUES];

struct mm_struct init_mm = INIT_MM;
struct task_struct init_task = INIT_TASK;
@@ -107,6 +109,7 @@

static inline void add_to_runqueue(struct task_struct * p)
{
+ struct task_struct *q_head = run_q[p->run_q];
#ifdef __SMP__
int cpu=smp_processor_id();
#endif
@@ -116,12 +119,24 @@
return;
}
#endif
- if (p->counter > current->counter + 3)
+ if ((p->run_q < current->run_q) ||
+ ((p->run_q == current->run_q) &&
+ (p->counter > current->counter + 3)))
need_resched = 1;
nr_running++;
- (p->prev_run = init_task.prev_run)->next_run = p;
- p->next_run = &init_task;
- init_task.prev_run = p;
+
+ if (q_head == NULL) {
+ run_q[p->run_q] = p;
+ p->prev_run = p;
+ p->next_run = p;
+ } else {
+ p->prev_run = q_head->prev_run;
+ p->next_run = q_head;
+ q_head->prev_run->next_run = p;
+ q_head->prev_run = p;
+ }
+ p->last_run = jiffies;
+
#ifdef __SMP__
/* this is safe only if called with cli()*/
while(set_bit(31,&smp_process_available))
@@ -154,8 +169,9 @@

static inline void del_from_runqueue(struct task_struct * p)
{
- struct task_struct *next = p->next_run;
struct task_struct *prev = p->prev_run;
+ struct task_struct *next = p->next_run;
+ struct task_struct *q_head = run_q[p->run_q];

#if 1 /* sanity tests */
if (!next || !prev) {
@@ -172,26 +188,53 @@
return;
}
nr_running--;
- next->prev_run = prev;
+
+ /* remove links to p */
prev->next_run = next;
- p->next_run = NULL;
+ next->prev_run = prev;
+ /* set p's links */
p->prev_run = NULL;
+ p->next_run = NULL;
+ /* if p was the q_head, reset q_head */
+ if (p == q_head) {
+ if (next == p)
+ run_q[p->run_q] = NULL;
+ else
+ run_q[p->run_q] = next;
+ }
}

-static inline void move_last_runqueue(struct task_struct * p)
+static inline void switch_runqueues(struct task_struct * p, unsigned int new_run_q)
{
- struct task_struct *next = p->next_run;
struct task_struct *prev = p->prev_run;
+ struct task_struct *next = p->next_run;
+ struct task_struct *q_head = run_q[p->run_q];

- /* remove from list */
- next->prev_run = prev;
+ /* remove links to p */
prev->next_run = next;
- /* add back to list */
- p->next_run = &init_task;
- prev = init_task.prev_run;
- init_task.prev_run = p;
- p->prev_run = prev;
- prev->next_run = p;
+ next->prev_run = prev;
+ /* if p was the q_head, reset q_head */
+ if (p == q_head) {
+ if (next == p)
+ run_q[p->run_q] = NULL;
+ else
+ run_q[p->run_q] = next;
+ }
+
+ p->run_q = new_run_q;
+ q_head = run_q[p->run_q];
+ if (q_head == NULL) {
+ run_q[p->run_q] = p;
+ p->prev_run = p;
+ p->next_run = p;
+ } else {
+ p->prev_run = q_head->prev_run;
+ p->next_run = q_head;
+ q_head->prev_run->next_run = p;
+ q_head->prev_run = p;
+ }
+ p->last_run = jiffies;
+ p->counter = 0;
}

/*
@@ -204,7 +247,7 @@
*/
inline void wake_up_process(struct task_struct * p)
{
- unsigned long flags;
+ unsigned long flags;

save_flags(flags);
cli();
@@ -255,8 +298,8 @@
* runqueue (taking priorities within processes
* into account).
*/
- if (p->policy != SCHED_OTHER)
- return 1000 + p->rt_priority;
+ if (p->policy != SCHED_ADAPTIVE)
+ return 1000 + p->rt_priority;

/*
* Give the process a first-approximation goodness value
@@ -284,25 +327,19 @@
}

/*
- * 'schedule()' is the scheduler function. It's a very simple and nice
- * scheduler: it's not perfect, but certainly works for most things.
- *
- * The goto is "interesting".
- *
- * NOTE!! Task 0 is the 'idle' task, which gets called when no other
- * tasks can run. It can not be killed, and it cannot sleep. The 'state'
- * information in task[0] is never used.
+ * schedule() does run-queue maintenance, and picks a process to run.
+ * It is a QNX-like scheduler that uses 32 separate run-queues.
*/
asmlinkage void schedule(void)
{
- int c;
+ static int next_maintenance = 0;
+ int c, q;
struct task_struct * p;
struct task_struct * prev, * next;
unsigned long timeout = 0;
int this_cpu=smp_processor_id();

-/* check alarm, wake up any interruptible tasks that have got a signal */
-
+ /* check alarm, wake up any interruptible tasks that have got a signal */
if (intr_count)
goto scheduling_in_interrupt;

@@ -320,8 +357,11 @@
/* move an exhausted RR process to be last.. */
if (!prev->counter && prev->policy == SCHED_RR) {
prev->counter = prev->priority;
- move_last_runqueue(prev);
+ run_q[prev->run_q] = prev->next_run;
}
+ /* move the last process run to the end of the queue */
+ if (prev->pid && prev->policy != SCHED_FIFO)
+ run_q[prev->run_q] = prev->next_run;
switch (prev->state) {
case TASK_INTERRUPTIBLE:
if (prev->signal & ~prev->blocked)
@@ -336,11 +376,42 @@
}
default:
del_from_runqueue(prev);
+ prev->run_q = prev->run_q_min;
+ prev->counter = DEF_PRIORITY;
case TASK_RUNNING:
}
- p = init_task.next_run;
sti();
-
+
+ /*
+ * Take care of re-niced processes, promote starving processes
+ */
+ if (jiffies >= next_maintenance) {
+ for_each_task(p) {
+ if (p->run_q_sw < NR_RUN_QUEUES) {
+ cli();
+ if (p->next_run) {
+ p->run_q_min = p->run_q_sw;
+ if (p->run_q != p->run_q_sw)
+ switch_runqueues(p, p->run_q_sw);
+ } else {
+ p->run_q_min = p->run_q = p->run_q_sw;
+ }
+ p->run_q_sw = NR_RUN_QUEUES;
+ sti();
+ } else if (p->next_run &&
+ (jiffies - p->last_run >= HZ) &&
+ (p->run_q > p->run_q_min) &&
+ prev->pid &&
+ (prev->run_q < p->run_q))
+ {
+ cli();
+ switch_runqueues(p, p->run_q - 1);
+ sti();
+ }
+ }
+ next_maintenance = jiffies + (500*HZ/1000);
+ }
+
#ifdef __SMP__
/*
* This is safe as we do not permit re-entry of schedule()
@@ -349,35 +420,57 @@
#define idle_task (task[cpu_number_map[this_cpu]])
#else
#define idle_task (&init_task)
-#endif
+#endif

/*
* Note! there may appear new tasks on the run-queue during this, as
* interrupts are enabled. However, they will be put on front of the
* list, so our list starting at "p" is essentially fixed.
*/
-/* this is the scheduler proper: */
+ /* pick a process */
c = -1000;
next = idle_task;
- while (p != &init_task) {
- int weight = goodness(p, prev, this_cpu);
- if (weight > c)
- c = weight, next = p;
- p = p->next_run;
+ for (q = 0; q < NR_RUN_QUEUES; q++) {
+ p = run_q[q];
+ if (!p) continue;
+ do {
+ int weight = goodness(p, prev, this_cpu);
+ if (weight > c) {
+ c = weight;
+ next = p;
+ }
+ p = p->next_run;
+ } while (p != run_q[q]);
+ if (!c) {
+ p = run_q[q];
+ do {
+ p->counter = DEF_PRIORITY;
+ p = p->next_run;
+ } while (p != run_q[q]);
+ }
+ if (c > -1000) break;
}
-
- /* if all runnable processes have "counter == 0", re-calculate counters */
- if (!c) {
- for_each_task(p)
- p->counter = (p->counter >> 1) + p->priority;
+ next->last_run = jiffies;
+ /* Possibly demote the previous task */
+ if (prev->pid && next->pid &&
+ prev->next_run &&
+ !prev->counter &&
+ (next != prev) &&
+ (q == prev->run_q) &&
+ (q < NR_RUN_QUEUES - 2) &&
+ (prev->policy == SCHED_ADAPTIVE))
+ {
+ cli();
+ switch_runqueues(prev, prev->run_q + 1);
+ sti();
}
+
#ifdef __SMP__
/*
* Allocate process to CPU
*/
-
- next->processor = this_cpu;
- next->last_processor = this_cpu;
+ next->processor = this_cpu;
+ next->last_processor = this_cpu;
#endif
#ifdef __SMP_PROF__
/* mark processor running an idle thread */
@@ -1121,7 +1214,7 @@
do_process_times(p, user, system);
do_it_virt(p, user);
do_it_prof(p, ticks);
-}
+}

static void update_process_times(unsigned long ticks, unsigned long system)
{
@@ -1134,7 +1227,7 @@
p->counter = 0;
need_resched = 1;
}
- if (p->priority < DEF_PRIORITY)
+ if (p->run_q > DEF_RUN_QUEUE)
kstat.cpu_nice += user;
else
kstat.cpu_user += user;
@@ -1380,16 +1473,16 @@
if (policy < 0)
policy = p->policy;
else if (policy != SCHED_FIFO && policy != SCHED_RR &&
- policy != SCHED_OTHER)
+ policy != SCHED_ADAPTIVE)
return -EINVAL;

/*
* Valid priorities for SCHED_FIFO and SCHED_RR are 1..99, valid
- * priority for SCHED_OTHER is 0.
+ * priority for SCHED_ADAPTIVE is 0.
*/
if (lp.sched_priority < 0 || lp.sched_priority > 99)
return -EINVAL;
- if ((policy == SCHED_OTHER) != (lp.sched_priority == 0))
+ if ((policy == SCHED_ADAPTIVE) != (lp.sched_priority == 0))
return -EINVAL;

if ((policy == SCHED_FIFO || policy == SCHED_RR) && !suser())
@@ -1400,9 +1493,10 @@

p->policy = policy;
p->rt_priority = lp.sched_priority;
+ p->run_q_sw = p->run_q_min;
cli();
- if (p->next_run)
- move_last_runqueue(p);
+ if (p->next_run)
+ run_q[p->run_q] = p->next_run;
sti();
need_resched = 1;
return 0;
@@ -1460,10 +1554,10 @@
{
cli();
if (current->next_run) {
- move_last_runqueue(current);
- current->counter = 0;
- need_resched = 1;
- }
+ run_q[current->run_q] = current->next_run;
+ current->counter = 0;
+ need_resched = 1;
+ }
sti();
return 0;
}
@@ -1471,11 +1565,11 @@
asmlinkage int sys_sched_get_priority_max(int policy)
{
switch (policy) {
+ case SCHED_ADAPTIVE:
+ return 0;
case SCHED_FIFO:
case SCHED_RR:
return 99;
- case SCHED_OTHER:
- return 0;
}

return -EINVAL;
@@ -1484,11 +1578,11 @@
asmlinkage int sys_sched_get_priority_min(int policy)
{
switch (policy) {
+ case SCHED_ADAPTIVE:
+ return 0;
case SCHED_FIFO:
case SCHED_RR:
return 1;
- case SCHED_OTHER:
- return 0;
}

return -EINVAL;
@@ -1555,7 +1649,7 @@
return -EINVAL;

if (t.tv_sec == 0 && t.tv_nsec <= 2000000L &&
- current->policy != SCHED_OTHER) {
+ current->policy != SCHED_ADAPTIVE) {
/*
* Short delay requests up to 2 ms will be handled with
* high precision by a busy wait for all real-time processes.
@@ -1645,7 +1739,7 @@
* We have to do a little magic to get the first
* process right in SMP mode.
*/
- int cpu=smp_processor_id();
+ int i, cpu=smp_processor_id();
#ifndef __SMP__
current_set[cpu]=&init_task;
#else
@@ -1656,4 +1750,8 @@
init_bh(TIMER_BH, timer_bh);
init_bh(TQUEUE_BH, tqueue_bh);
init_bh(IMMEDIATE_BH, immediate_bh);
+
+ for (i = 0; i < NR_RUN_QUEUES; i++)
+ run_q[i] = NULL;
+ printk("QNX-style scheduling v1.02 <amckee@poboxes.com>\n");
}
--- linux/kernel/fork.c.orig Mon Jul 28 13:13:35 1997
+++ linux/kernel/fork.c Mon Jul 28 14:01:39 1997
@@ -264,6 +264,8 @@
p->lock_depth = 1;
#endif
p->start_time = jiffies;
+ p->policy = SCHED_ADAPTIVE;
+ p->run_q_sw = NR_RUN_QUEUES;
task[nr] = p;
SET_LINKS(p);
nr_tasks++;
--- linux/include/linux/sched.h.orig Mon Jul 28 13:16:11 1997
+++ linux/include/linux/sched.h Mon Jul 28 02:16:54 1997
@@ -87,10 +87,13 @@
#define TASK_STOPPED 4
#define TASK_SWAPPING 5

+#define NR_RUN_QUEUES 32 /* the scheduler uses 32 run-queues */
+#define DEF_RUN_QUEUE 15 /* default run-queue */
+
/*
* Scheduling policies
*/
-#define SCHED_OTHER 0
+#define SCHED_ADAPTIVE 0
#define SCHED_FIFO 1
#define SCHED_RR 2

@@ -244,6 +247,9 @@
struct mm_struct *mm;
/* signal handlers */
struct signal_struct *sig;
+/* QNX-style scheduler */
+ unsigned long last_run;
+ unsigned int run_q, run_q_min, run_q_sw;
#ifdef __SMP__
int processor;
int last_processor;
@@ -275,7 +281,7 @@
*/
#define _STK_LIM (8*1024*1024)

-#define DEF_PRIORITY (20*HZ/100) /* 200 ms time slices */
+#define DEF_PRIORITY (200*HZ/1000) /* 200 ms time slices */

/*
* INIT_TASK is used to set up the first task table, touch at
@@ -293,7 +299,7 @@
/* suppl grps*/ {NOGROUP,}, \
/* proc links*/ &init_task,&init_task,NULL,NULL,NULL,NULL, \
/* uid etc */ 0,0,0,0,0,0,0,0, \
-/* timeout */ 0,SCHED_OTHER,0,0,0,0,0,0,0, \
+/* timeout */ 0,SCHED_ADAPTIVE,0,0,0,0,0,0,0, \
/* timer */ { NULL, NULL, 0, 0, it_real_fn }, \
/* utime */ 0,0,0,0,0, \
/* flt */ 0,0,0,0,0,0, \
@@ -309,6 +315,7 @@
/* files */ &init_files, \
/* mm */ &init_mm, \
/* signals */ &init_signals, \
+/* QNX sched */ 0, DEF_RUN_QUEUE, DEF_RUN_QUEUE, NR_RUN_QUEUES \
}

extern struct mm_struct init_mm;