Subject: RE: Linux and real device drivers

Jes Sorensen [mailto:Jes.Sorensen@cern.ch] wrote:
> >>>>> "Bret" == Bret Indrelee <breti@bit3.com> writes:
> Bret> Jes Sorensen asserts:
> >> Yes, UDI also guarantees you a tremendous overhead and almost
> >> certainly lousy performance - last time I looked at the spec, about
> >> a year ago, it certainly wasn't pretty.
>
> Bret> If this isn't just a case of you spreading FUD, could you please
> Bret> give some data to back up your claims?
>
> Bret> To my knowledge, no one has run benchmarks against the current
> Bret> UDI definition. The original prototype was a little bit slower,
> Bret> but performance changes were made to minimize or eliminate these
> Bret> areas.
>
> Again, the last version of UDI I looked at was 0.80. Anyway, UDI means
> going through indirect functions to simply read/write a device
> register since you have no idea how a device is mapped in a certain
> type of machine or operating system .... this alone should be enough
> to prove my point.

It does prove a point, but maybe not the one you intended.

The PCI bus normally runs at 33 MHz. A write takes at least two clock
cycles, and a read can take much longer. That gives us roughly 60 ns
minimum for a write transaction, during which the CPU can do nothing. On a
read, many PCI devices will send a RETRY just to give themselves more
time, so a device read can land anywhere from hundreds of ns up to
single-digit ms.

Modern processors run in excess of 200 MHz, which comes out to about 5 ns
per cycle. The processor can execute about 12 instructions in the time it
takes to do the minimum PCI cycle. That is more than enough time for a
minimal subroutine call. The faster your processor, the worse the ratio gets.

If you want to optimize something, optimize out unneeded accesses to the
device. Don't think you are running faster merely because you got rid of
one level of subroutine call; you are just fooling yourself.
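For example (a hypothetical sketch, not from any particular driver),
keeping a shadow copy of a control register turns a read-modify-write into
a single PCI write and eliminates the expensive read entirely:

    #include <stdint.h>

    /* Hypothetical device context: ctrl_shadow mirrors the last value
     * written to the control register, and it lives in ordinary RAM. */
    struct dev_ctx {
        volatile uint32_t *ctrl_reg;   /* memory-mapped device register */
        uint32_t ctrl_shadow;          /* cached copy of last write */
    };

    static void ctrl_set_bits(struct dev_ctx *dev, uint32_t bits)
    {
        dev->ctrl_shadow |= bits;           /* modify the cached copy */
        *dev->ctrl_reg = dev->ctrl_shadow;  /* one PCI write, no PCI read */
    }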

The other point to keep in mind is that NGIO will quite likely remove your
ability to access those device registers directly. Since it moves to a
channel architecture, any NGIO-to-PCI bridge is going to need code to give
you access to PCI I/O or memory space.

Now if we look at it from a system perspective, you can gain a lot more
performance by making your mutexes work more efficiently.

Looking at the SMP code in Linux, there is a large difference between what
is required for spin_lock_irqsave() and for plain spin_lock(), for
instance. Since the UDI layer would move almost all of the code out of
interrupt level, it is possible that a system would run faster using a UDI
layer than the native Linux drivers. In any event, the Linux mutex and
semaphore logic could change quite a bit without any device drivers having
to change.
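As a rough illustration (the device and field names here are made up, but
the locking primitives are the standard Linux ones):

    #include <linux/spinlock.h>
    #include <linux/list.h>

    /* Hypothetical driver state. */
    struct mydev {
        spinlock_t queue_lock;    /* taken only from process context */
        spinlock_t state_lock;    /* also taken from the interrupt handler */
        struct list_head queue;
        int status;
    };

    /* Process context only: the plain lock is enough, since no
     * interrupt handler ever takes queue_lock. */
    static void mydev_queue_request(struct mydev *dev, struct list_head *req)
    {
        spin_lock(&dev->queue_lock);
        list_add_tail(req, &dev->queue);
        spin_unlock(&dev->queue_lock);
    }

    /* Shared with the interrupt handler: local interrupts must be
     * disabled, or the handler could preempt us here and deadlock
     * trying to take state_lock on the same CPU. */
    static void mydev_set_status(struct mydev *dev, int status)
    {
        unsigned long flags;

        spin_lock_irqsave(&dev->state_lock, flags);
        dev->status = status;
        spin_unlock_irqrestore(&dev->state_lock, flags);
    }

On a uniprocessor kernel spin_lock() compiles away to nothing, while
spin_lock_irqsave() still pays for the interrupt disable, so a driver
layer that keeps its work out of interrupt context gets to use the cheap
form.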

-Bret

-------------------------------------------------------------
SBS Technologies, Connectivity Products
... solutions for real-time connectivity

Bret Indrelee, Engineer
SBS Technologies, Inc., Connectivity Products
1284 Corporate Center Drive, St. Paul MN 55121
Direct: (651) 905-4731
Main: (651) 905-4700 Fax: (651) 905-4701
E-mail: bindrelee@sbs-cp.com http://www.sbs.com
-------------------------------------------------------------

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
