Subject: Re: PCI_LATENCY_TIMER
Edward Welbon wrote:

> "The specification states that this read-only register specifies "how
> often" the device needs access to the PCI bus (in increments of 1/4 of a
> microsecond or 250ns). A value of zero indicates that the device has no
> stringent requirement in this area.
>
> "In the author's opinion, this description (i.e., "how often") is a little
> unclear. The name of the register MAX_LAT, indicates to the author that
> if defines how quickly the master would like to access the bus i.e., its
> GNT# asserted by the arbiter) after it asserts its REQ#. If this is the
> case, then the value hardwired into this register would be used by the
> configuration software to determine the priority level the bus arbitrer
> assigns to the master"

This is also nowhere near my understanding of that register. I have not
read the spec, so I can't comment on it directly. I'm basing what I say on
the documentation I have for the Adaptec controllers, which includes their
interpretation of how this register is to be used by the PCI BIOS and bus.
From what I'm reading in the documentation, this register indicates to the
BIOS how long after this device's last PCI request completes until the next
request needs to be started. In the latest Adaptec documentation, they back
this up with the calculations they used to hardwire the MIN_GNT and MAX_LAT
values into the Ultra2 chipsets.

512 bytes (in the data FIFO)
---------------------------- = 9.7us
      (133 - 80)MByte/s

MIN_GNT = 9.7us / 0.25us = 39 (27h)

Here they are stating that the MIN_GNT needed for them to transfer their
entire 512-byte FIFO to system memory, while the SCSI bus is *also*
transferring into that same FIFO at 80MByte/s, is 9.7us. Therefore, at peak
operation, the minimum amount of time needed to complete the entire transfer
is the MIN_GNT register's 9.75us setting. Older cards didn't factor the
SCSI transfer into the equation and therefore used lower MIN_GNT values, and
the non-Ultra2 cards also have a smaller 256-byte data FIFO. For those
cards, a 256-byte FIFO, not counting any transfers into that FIFO while you
are transferring across the PCI bus, works out to 1.92us, which quite nicely
fits the MIN_GNT register setting of 8 (8 x 0.25us = 2.0us).
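
Just to make the arithmetic concrete, here is a minimal C sketch of that
MIN_GNT calculation (the FIFO sizes and transfer rates are the ones from
the Adaptec documentation above; the helper name is my own):

#include <stdio.h>

/* MIN_GNT is in 0.25us ticks: time to drain the FIFO over PCI, where the
 * effective drain rate is the PCI burst rate minus whatever the SCSI side
 * is still pouring into the FIFO during the transfer. */
static int min_gnt(double fifo_bytes, double pci_mbs, double scsi_mbs)
{
    double us = fifo_bytes / (pci_mbs - scsi_mbs);
    return (int)(us / 0.25 + 0.5);  /* round to nearest 0.25us tick */
}

int main(void)
{
    /* Ultra2: 512-byte FIFO, 133MB/s PCI, 80MB/s SCSI inflow -> 39 (27h) */
    printf("Ultra2 MIN_GNT = %d\n", min_gnt(512, 133, 80));
    /* Older cards: 256-byte FIFO, SCSI inflow ignored -> 8 */
    printf("Older  MIN_GNT = %d\n", min_gnt(256, 133, 0));
    return 0;
}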

The MAX_LAT register is calculated as:

512 bytes
--------- = 6.4us
80MByte/s

and therefore they set

MAX_LAT = 6.4us / 0.25us = 25 (19h)

On the older cards, this setting was wrong. If you take a 256-byte buffer
and a 40MByte/s transfer rate, you still get 6.4us. However, since the SCSI
transfer rate wasn't included in the MIN_GNT period, that 6.4us has to be
counted from the start of one MIN_GNT period to the start of the next, so
you need to subtract the 2.0us of MIN_GNT from the MAX_LAT setting, which
gives 25 - 8, or 17.
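
And the matching sketch for MAX_LAT, including the older-card correction
just described (again, only a sketch; the function name is mine):

#include <stdio.h>

/* MAX_LAT in 0.25us ticks: how soon the card needs the bus again, i.e.
 * the time for the SCSI side to refill the FIFO. */
static int max_lat(double fifo_bytes, double scsi_mbs)
{
    return (int)(fifo_bytes / scsi_mbs / 0.25);
}

int main(void)
{
    /* Ultra2: 512 bytes / 80MB/s = 6.4us -> 25 (19h) */
    printf("Ultra2 MAX_LAT = %d\n", max_lat(512, 80));
    /* Older cards: 256 bytes / 40MB/s = 6.4us, but that 6.4us runs from
     * the start of one grant to the start of the next, so the 2.0us
     * MIN_GNT (a setting of 8) must come off: 25 - 8 = 17. */
    printf("Older  MAX_LAT = %d\n", max_lat(256, 40) - 8);
    return 0;
}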

> > > SCSI:
> > > MIN_GNT = 8 --> 8 x 0.25 = 4 micro-seconds
> > > MAX_LAT = 8 --> 8 x 0.25 = 4 micro-seconds
> > > LATENCY_TIMER = 64 --> 64x0.030 = 1.92 micro-seconds
> > >
> > > IDE:
> > > LATENCY_TIMER = 64 --> 64x0.030 = 1.92 micro-seconds
> > >
> > > Network:
> > > MIN_GNT = 8 --> 8 x 0.25 = 4 micro-seconds
> > > MAX_LAT = 28 --> 28x 0.25 = 7 micro-seconds
> > > LATENCY_TIMER = 64 --> 64x0.030 = 1.92 micro-seconds
> > >
> > > If we only take into account these 3 devices, the predictable PCI BUS
> > > latency is 2*1.92 = 3.84 micro-seconds, which fits the MAX_LAT
> > > requirement of the SCSI device, the lowest MAX_LAT value of the three.
>
> Agreed, assuming that all devices are posting requests and the arbiter
> fulfills the requests in a fair manner, the SCSI device will have to wait
> at most for two devices to complete a request prior to its request, and
> that wait time will be less than the MAX_LAT time "desired" by the
> SCSI.

Right. Of course, if there isn't bus contention, then this is all a red
herring anyway, but we all know that.
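
Edward's "wait at most for two devices" bound works out mechanically; here
is a small C sketch of that arithmetic (assuming, as in the quoted
scenario, a fair arbiter and every master burning its full latency timer):

#include <stdio.h>

#define PCI_CLK_US 0.030  /* one 33MHz PCI clock, in microseconds */

int main(void)
{
    int masters = 3;      /* SCSI + IDE + network, from the scenario above */
    int lat_timer = 64;   /* LATENCY_TIMER programmed into each device */

    /* With a fair arbiter, a master waits at most for each of the other
     * masters to hold the bus for one full latency timer. */
    double worst_wait_us = (masters - 1) * lat_timer * PCI_CLK_US;

    printf("worst-case wait = %.2fus\n", worst_wait_us);  /* 3.84us */
    return 0;
}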

> > > My comments:
> > >
> > > 1 - The system software that chose a latency timer of 64 for all
> > > devices has not been able to fit the MIN_GNT value, due to the SCSI
> > > controller very probably providing _wrong_ information, but the
> > > MAX_LAT requirement of all devices has been achieved.
>
> We have disagreed on the latency timers' proper settings in the past. In
> my work with network cards, it can be better to give the network long
> latencies on the bus. The arbitration overhead can get costly if the
> latency is too short. I have seen systems with multiple 100bT cards do
> badly with a latency timer of 32 but do well with a latency timer of 128.
> I have no doubt that this is a thing that merits testing on a given system.



> > > 2 - A device that desires to be granted 4 us for a BUS transaction and
> > > that wants the maximum BUS latency to be at most that same 4 us is a
> > > kind of shit-maker for PCI BIOSes and PCI drivers that want to make
> > > things fine, unless it is required to be the unique device on a PCI BUS.
>
> I agree, it is not a sane requirement from the scsi device.

Sane is relative. If your device is capable of consuming 80% of the PCI
bus's available bandwidth, and it sets the registers to reflect that fact,
then is it being insane or merely truthful? After all, the device is
supposed to put into these registers what it needs in a perfect world to
achieve maximum performance, but as the Adaptec documentation says, the
controllers can work with anything from a one-PCLK GNT to a parked status.
However, these registers have *no effect* on a running system other than as
a guide to the PCI BIOS in setting things like the LAT_TIMER. It's the
LAT_TIMER that makes all the difference.
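
For reference, all three are plain config-space bytes. Here is a rough
sketch of how setup code might consult them when programming LAT_TIMER;
the helper and its clamping policy are purely my own illustration, not what
any real PCI BIOS does:

#include <linux/pci.h>

/* Hypothetical helper, not the real BIOS logic: one plausible policy is
 * to give a device a latency timer long enough to cover its MIN_GNT
 * desire (MAX_LAT would similarly feed arbiter priority, not shown). */
static void program_latency(struct pci_dev *dev)
{
    u8 min_gnt;
    unsigned int clocks;

    pci_read_config_byte(dev, PCI_MIN_GNT, &min_gnt); /* 0.25us units */

    /* 0.25us (250ns) ticks -> 30ns PCI clocks; fall back to a generic
     * 64 clocks when the device states no requirement (MIN_GNT == 0). */
    clocks = min_gnt ? (min_gnt * 250) / 30 : 64;
    if (clocks > 255)
        clocks = 255;

    pci_write_config_byte(dev, PCI_LATENCY_TIMER, (u8)clocks);
}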

> > > > 00:0b.0 SCSI storage controller: Adaptec AIC-7880U
> > >
> > > What a great illumination I had 4 years ago to go with Symbios
> > > controllers rather than Adaptec ones. ;-)
>
> The Symbios controllers work well for me; I had nothing but misery with
> Adaptec stuff. I have a nine-disk RAID on three NCR53c875 cards, along
> with one BusLogic 545C ISA card (for boot), in this system, which I have
> pounded for days on end with multiple copies of Bonnie running plus
> continuous kernel builds (8 gig of disk data in flight) with no errors. I
> was rarely able to get a single thread of Bonnie to complete very many
> times when I was using the 2940UW and aha1452. Am I correct that the
> hostile takeover of Symbios by Adaptec was nixed by the FTC?

Well, without getting into any sort of pissing contest, I remember when you
were reporting problems with the aic7xxx driver and cards, and I'll only
mention that those reports were at least one major and two minor revisions
ago. Saying that your experience then is justification for anything now
would be like stating that you won't use the current Linux kernel because
you had problems with version 1.3.71 or some such. Then and now are two
entirely different beasts, and generalizations based on that would be
logically false.

> > Your interpretation of the PCI spec and what Adaptec thinks these values
> > mean in terms of the PCI bus are two different things. One of you is wrong
> > about what the MAX_LAT value is all about.
>
> It would not be the first time that the sane interpretation of a Spec was
> not the "correct" interpretation. I think that most controllers ought to
> have a value of zero. The controller needs to be able to live with large
> delays to GNT# without barfing.

Whoever said a controller can't live with that? These registers are a
guide. At worst, cards that have small buffers and no flow-control
capability would drop information if these parameters are not met, but that
doesn't describe very many devices that I'm aware of. The SCSI sub-system
in particular is immune to this problem (simply quit sending ACKs during a
transfer cycle; once the offset value of outstanding REQs has been reached,
the device will quit transferring data, and then you can wait forever to
get the bus if need be). If your bus is congested, then you are going to
fail to meet everything's requirements eventually. If your bus isn't
congested, then the devices can get whatever they want. So, the only time
there is any justification in calling a device a "shit maker" for the PCI
bus is when your PCI bus is overloaded, and then it doesn't matter what the
device is; they are all going to suffer from the bus congestion.

For example, I have two different 3950U2B controllers in my machine right
now. Each controller has two separate PCI functions. Each function reports
MIN_GNT as 39 and MAX_LAT as 25. Multiply that by four and what do you
get? Something impossible to meet. Why are they so particular? Well, each
function is a separate Ultra2 wide SCSI controller, and they operate
entirely independently of each other, fully in parallel, so the four
channels are capable of 320MB/s of data transfer. That number is so far
above the PCI bus's 133MB/s that there isn't a chance in hell the PCI bus
could keep up with all four of them. But that doesn't mean the MIN_GNT and
MAX_LAT values are wrong, just that the devices are fast enough that the
PCI bus can't possibly keep up with all of them.
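
The back-of-the-envelope numbers for that box, for anyone who wants to
check them (a trivial sketch; the figures are the ones given above):

#include <stdio.h>

int main(void)
{
    int channels = 4;      /* two 3950U2B boards, two functions each */
    int ultra2_mbs = 80;   /* Ultra2 wide SCSI, per channel */
    int pci_mbs = 133;     /* 32-bit/33MHz PCI burst rate */

    /* Four channels each wanting 9.75us grants (MIN_GNT 39) within
     * 6.25us of asserting REQ# (MAX_LAT 25): arithmetic says no. */
    printf("aggregate demand %dMB/s vs PCI %dMB/s\n",
           channels * ultra2_mbs, pci_mbs);  /* 320 vs 133 */
    return 0;
}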

--

Doug Ledford <dledford@dialnet.net>
Opinions expressed are my own, but
they should be everybody's.

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
