Subject: Re: An elevator algorithm (patch)
On Sun, Sep 17, 2000 at 01:26:22AM +0200, Peter Osterlund wrote:
> Indeed, the elevator logic is somewhat flawed. There are two problems
> with the current code:
>
> 1. The test that decides if we have found a good spot to insert the
> current request doesn't handle the wraparound case correctly. (The
> case when the elevator reaches the end of the disk and starts over
> from the beginning.)
>
> 2. If we can't find a good spot to insert the new request, we
> currently insert it as early as possible in the queue. If no good
> spot is found, it is more efficient and more fair to insert the new
> request last in the queue.
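
For reference, the wraparound in point 1 amounts to a cyclic ordering
test: does sector s lie on the upward sweep between a and b when the
sweep can wrap from the end of the disk back to the start? A minimal
sketch of such a test (not the actual patch, the name is made up)
would be:

	/*
	 * Does sector s lie on the elevator's upward sweep between
	 * sectors a and b, where the sweep may wrap from the end of
	 * the disk back to the beginning?
	 */
	static inline int in_sweep_order(unsigned long a, unsigned long s,
					 unsigned long b)
	{
		if (a <= b)
			return a <= s && s <= b;	/* no wrap between a and b */
		return s >= a || s <= b;		/* sweep wraps past the end */
	}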

The patch is buggy for non-head-active devices like SCSI, and also for IDE
while the queue is plugged.

Assume the queue looks like this (it's SCSI, or IDE with the queue plugged):

HEAD A

HEAD is &q->queue_head, and A is a request in the queue (not under processing
in the IDE case).

A runtime trace would look like this:

head = HEAD
real_head = HEAD

last = A
insert_after = A
entry = A
tmp1 = A
tmp2 = A
entry = HEAD
tmp1 = HEAD
read tmp1 -> you're reading random kernel memory here
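
In other words, the scan walks onto &q->queue_head itself: the list head is
embedded in the request_queue_t, not in a struct request, so converting it
back to a request reads whatever happens to sit next to queue_head in the
queue structure. The scan has to stop before touching HEAD, roughly like
this (just a sketch, not the patched code; blkdev_entry_to_request is the
helper that maps a queue entry back to its struct request):

	struct list_head *entry = q->queue_head.next;	/* A */
	struct request *tmp;

	while (entry != &q->queue_head) {	/* stop BEFORE converting HEAD */
		tmp = blkdev_entry_to_request(entry);
		/* ... compare tmp->sector against the new request ... */
		entry = entry->next;
	}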

While the queue is plugged, or with things like SCSI, your logic change won't
work: in that case, if your request is lower than the lowest one in the queue,
you can put it at the head of the queue, and you have no way to know where
your "tmp1" was placed, so you can't make any assumption (that's why the
current code makes sense).

But I had an idea to generalize the algorithm so that we could optimize
SCSI as well, and also IDE in the plugged case: nothing forbids us from
remembering the last position of the drive head in the request_queue_t, so
that we can still know your "tmp1", which we currently don't know about.
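
Concretely it would be something along these lines (just a sketch, the
field name and the hook are made up, nothing of this is implemented
anywhere yet):

	/* in request_queue_t */
	unsigned long last_sector;	/* sector the head was last sent to */

	/* when a request is handed to the lowlevel driver */
	q->last_sector = req->sector + req->nr_sectors;

	/*
	 * The insertion scan can then order a new request against
	 * q->last_sector even while the queue is plugged or the device
	 * is not head-active, instead of guessing from "tmp1".
	 */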

If nobody does that before me, I will try this "remember the last position of
the head" idea in my blkdev tree (there are many other pending elevator fixes
in it) as soon as I have finished with 2.2.18pre9aa1 LFS nfsv3, and as soon as
I finish the fix for the spinlocks (this spinlock "memory" thing got starved
over and over again unfortunately; at least it's not an urgent fix :).

Thanks for your comments.

Andrea
