    Subject: Re: disk head scheduling
    On Fri, Mar 19, 1999 at 03:45:11PM -0500, Richard B. Johnson wrote:
    > On Fri, 19 Mar 1999, Arvind Sankar wrote:
    > > Yeah, but some manufacturers are good enough to put it in the tech notes.
    > > Should the scheduling algo be put in as a device strategy function, with
    > > fallback to the current elevator if the device doesn't have one? Then we could
    > > implement two way elevator algos for those hard disks for which we can get
    > > physical geometry info from the data sheets or somewhere.
    > >
    > The typical IDE drive has a single platter and two heads. Just
    > like a floppy disk. The so-called geometry is specified only
    > for compatibility with the real-mode BIOS (0x13) interface that
    > expects heads/sectors/cylinders, like in the old ST-506 interface
    > days.

    IBM claims that my hard disk (Model IBM DHEA-38451) has 4 platters and 8 heads.
    I assume these are physical, since there are either 15 or 16 logical heads in
    CHS mode (settable via jumpers).

    > Any attempt to optimize performance based upon this phony geometry will
    > fail. The best you can do is attempt to keep read/write queues separate.

    Who's talking about optimizing based on phony geometries? Did you read what
    I wrote?

    > In other words, if possible, queue a bunch of reads and do them all
    > together, then queue a bunch of writes and do them all together. This
    > can cut down some time on the write-splice and other so-called rotational
    > latencies.
    > A write splice occurs when you need to write some new sectors between
    > existing sectors, i.e., you are not going to write an entire track. This
    > often requires that the disc make up to one complete rotation before a
    > write can begin because the drive has to "learn" where the starting sector
    > is by reading sectors. Note, all sectors on hard disks are "soft" which
    > means that there is no index to tell the electronics when to turn on the
    > write current. If writes get queued together, and you have multiple
    > writes, there is a chance some writes will occur on the same track. This
    > means that since the drive already knows where the sectors are on that
    > track, it doesn't have to reread.

    I don't get this. If this were true, then the average rotational latency of a
    disk would be the time for one complete rotation (half a rotation, on average,
    to find the starting sector, plus half a rotation to reach the target sector).
    But the average latency drives actually quote is equal to _half_ the time for
    a complete rotation.
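
    Quick arithmetic, assuming a 5400 RPM spindle (the RPM is my assumption, not
    a data-sheet figure):

        #include <stdio.h>

        int main(void)
        {
                double full_rev_ms = 60000.0 / 5400.0;  /* ~11.1 ms per rev */

                /* If the drive had to re-read to learn where the starting
                 * sector is, you'd expect half a rev to find it plus half
                 * a rev to reach the target: one full rev on average. */
                printf("full-rev model: %.1f ms\n", full_rev_ms);

                /* What the data sheets actually quote: half a rev, i.e.
                 * the drive already knows where the heads are. */
                printf("half-rev model: %.1f ms\n", full_rev_ms / 2.0);
                return 0;
        }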

    Bunching requests that are on the same track is good, since the drive
    probably does some read-ahead and caching. But accesses that are far enough
    apart that they can't be to the same track can be done in either order with
    no significant difference in time.
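
    With any rough sectors-per-track figure, "same track" is just an integer
    division. A toy sketch (spt here is an estimate, not real geometry):

        /* Nonzero iff sectors a and b fall on the same (estimated) track. */
        static int same_track(unsigned long a, unsigned long b,
                              unsigned long spt)
        {
                return a / spt == b / spt;
        }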

    > There is practically zero probability that optimization based upon phony
    > geometry will accomplish anything because the real geometry boundaries are
    > usually well hidden by the drive's sector buffer. All you do is waste

    IBM is good enough to tell me that there are 8 zones on my platters, numbered
    0 to 7 from the outside in. They also tell me the recording density at both
    ends. From that we can work out a rough estimate of the number of sectors per
    track and the number of tracks in a zone. If two requests are to different
    tracks, then which is performed before the other is largely irrelevant, so I
    can order my requests to minimize the total number of seeks.
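
    For instance (the zone sizes and density figures below are placeholders for
    whatever the data sheet really says, and the linear ramp between the two
    densities is a guess on my part):

        #include <stdio.h>
        #include <stdlib.h>

        #define NZONES          8
        #define SPT_OUTER       300     /* sectors/track, zone 0 (made up) */
        #define SPT_INNER       180     /* sectors/track, zone 7 (made up) */
        #define TRACKS_PER_ZONE 1000    /* made up as well */

        /* Rough sectors-per-track for a zone, interpolated linearly
         * between the outer and inner recording densities. */
        static unsigned long zone_spt(int zone)
        {
                return SPT_OUTER -
                       (unsigned long)(SPT_OUTER - SPT_INNER) * zone / (NZONES - 1);
        }

        /* Estimated track number for an absolute sector number. */
        static unsigned long track_of(unsigned long sector)
        {
                int zone;

                for (zone = 0; zone < NZONES; zone++) {
                        unsigned long zsecs = zone_spt(zone) * TRACKS_PER_ZONE;
                        if (sector < zsecs)
                                return zone * TRACKS_PER_ZONE +
                                       sector / zone_spt(zone);
                        sector -= zsecs;
                }
                return NZONES * TRACKS_PER_ZONE;        /* past the end */
        }

        /* Sort requests by estimated track, so requests that probably
         * share a track end up adjacent in the queue. */
        static int cmp_track(const void *a, const void *b)
        {
                unsigned long ta = track_of(*(const unsigned long *)a);
                unsigned long tb = track_of(*(const unsigned long *)b);
                return (ta > tb) - (ta < tb);
        }

        int main(void)
        {
                unsigned long req[] = { 123456UL, 8UL, 123789UL, 42UL };
                int i;

                qsort(req, 4, sizeof(req[0]), cmp_track);
                for (i = 0; i < 4; i++)
                        printf("sector %lu -> track %lu\n",
                               req[i], track_of(req[i]));
                return 0;
        }

    Sorting by raw sector number gives the same order anyway, since the track
    number grows monotonically with the sector number; the zone table only
    matters for deciding which requests actually share a track.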

    > CPU cycles that could be used elsewhere. What happens is you slightly
    > slow the overall operation of the entire machine without increasing the
    > Disc performance at all. Wasted CPU cycles are never gotten back, you
    > burned them up, making a beautiful I/O queue, then accomplished nothing
    > for your effort.

    Yes, but my machine is almost always waiting for disk I/O. The only time my
    CPU utilization goes over 90% is when I'm running gcc. Usually it's my idle
    time that is over 90%.

    The real argument against doing all this is that it will save at most one seek
    per two requests, which is only about 5% of the total time. Probably not worth
    all the effort.

    -- arvind
