Date: Sat, 1 Jun 1996 17:41:50 -0700
From: Tom May <>
Subject: Re: SVGA kernel chipset drivers.
I've spliced together a few messages in which Linus writes:
First, the executive summary:
> The ONLY solution is X, and anything else is just noise. They should be
> supported, but not at the cost of extra complexity in the kernel.
Truly. I have worked on projects where 70% of the coding, debugging, and testing effort required to support a board under certain OSs was wasted on the goddamn text-graphics-hotkey-dos-compatibility-lcd-panel-virtual-device-etc bullshit! Go into graphics and stay there, or don't go in at all. And if you have to read the state of the hardware back, in anything other than a diagnostic, that is a bad software system design.
Now the rest of it:
> Could we PLEASE stop this framebuffer nonsense?
>
> It's not going to happen. There are LOTS of cards that don't even _have_
> framebuffers (and they are usually the really high-end ones, the ones
> that people will want support for in the future).
I feel compelled to reply that I have made my living as a display driver programmer for the past six or seven years and have worked on a lot of high-end PC video cards as a 3rd-party contractor. All the high-end cards that I have worked on have frame buffers. Why? Because you can only provide hardware acceleration for a certain subset of common and important operations. So when your Windows driver, NT driver, OS/2 driver, or X server (I've done all of these and more) wants to do something weird (e.g., arbitrary-size stipple on X, MaskBlt on NT, background transparency on OS/2), the easiest and fastest way to do it is with direct frame buffer access instead of trying to break a complex operation into things the card can do in hardware -- which may be impossible, in which case you're stuck with blitting a rectangle from frame buffer to memory, drawing to it, and blitting it back.
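For the curious, that fallback looks roughly like this -- a minimal C sketch assuming a linear frame buffer, with made-up names (FB_PITCH, sw_draw) rather than anything lifted from a real driver:

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define FB_PITCH 1024          /* assumed bytes per scanline */

    /* Fall back to software for an operation the accelerator can't do:
     * blt the target rectangle to system memory, draw into it there,
     * then blt it back.  sw_draw() is the software rasterizer. */
    static void fallback_draw(volatile uint8_t *fb, int x, int y,
                              int w, int h,
                              void (*sw_draw)(uint8_t *buf, int pitch))
    {
        uint8_t *shadow = malloc((size_t)w * h);
        if (!shadow)
            return;

        /* blt frame buffer -> memory, one scanline at a time */
        for (int row = 0; row < h; row++)
            memcpy(shadow + row * w,
                   (const void *)(fb + (y + row) * FB_PITCH + x), w);

        sw_draw(shadow, w);        /* the weird operation, in software */

        /* blt memory -> frame buffer */
        for (int row = 0; row < h; row++)
            memcpy((void *)(fb + (y + row) * FB_PITCH + x),
                   shadow + row * w, w);

        free(shadow);
    }

The point is that the only chip-dependent piece is where the frame buffer lives; the weird operation itself never has to touch the accelerator.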
> Finally, IO port access is not at all less critical. As far as I know,
> there are _no_ PC graphics cards that you can really program without
> using IO ports. Stuff like colourmaps etc are almost invariably done with
> IO port access on PC's.
I/O port access is not critical, because only suboptimal cards use I/O ports anyway. The best way to make I/O ports faster is not to use them. Since at least the early 90s, good cards have used memory mapping for everything except maybe some init functions, including setting the palette and the cursor. This is because I/O cycles are SLOW and the I/O instructions tie up dedicated registers, so the code is worse -- consider the case of setting a register to a constant with I/O versus memory mapping. Some examples of such chips: Weitek P9000/P9100, #9 Imagine-128, Appian 98032 (4 bytes of I/O space to set the memory-mapping in the pre-PCI days) which led to the Cirrus Logic Laguna, and many others.
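To make the register-to-a-constant comparison concrete, here is a sketch; the register offsets are invented for illustration, and the outb() is just the usual gcc/x86 inline assembly, not any particular driver's:

    #include <stdint.h>

    /* Hypothetical register offsets, for illustration only. */
    #define PAL_INDEX      0x3C8   /* VGA-style I/O-mapped DAC index   */
    #define MMIO_PAL_INDEX 0x0100  /* assumed memory-mapped equivalent */

    /* I/O-mapped (x86 only): the port must be in DX (if > 255) and
     * the value in AL, so the compiler has no freedom in register
     * allocation, and the OUT cycle itself runs at glacial bus speed. */
    static inline void outb(uint16_t port, uint8_t val)
    {
        __asm__ __volatile__("outb %0, %1" : : "a"(val), "Nd"(port));
    }

    void set_index_io(uint8_t idx)
    {
        outb(PAL_INDEX, idx);
    }

    /* Memory-mapped: an ordinary store through whatever registers the
     * compiler likes, which the CPU can also post or write-combine. */
    void set_index_mmio(volatile uint8_t *mmio, uint8_t idx)
    {
        mmio[MMIO_PAL_INDEX] = idx;
    }

The I/O version is pinned to AL/DX and stalls on the bus; the memory-mapped version is just a store the compiler can schedule however it likes.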
Anybody using a card that is I/O mapped should not complain about the performance -- they are using intrinsically suboptimal hardware, and there is no reason to add hacks for it when they could be using good hardware, leaving programmers free to work on projects that advance the state of the art instead of holding it back.
> Trust me, we're not talking VGA here. We're talking _high_ end graphics,
> that don't have frame-buffers because that interferes with the normal
> mode of operations (painting the screen) for no real good reason (the
> actual drawing is then done using screen commands and/or DMA to the card
> to fill an area with data).
Are you thinking of the XGA? That's the only card I can think of that uses DMA. There may have been others; I don't read the specs for every chip ever made, just the ones I have worked on (which is pretty much a who's-who list) and one or two I have evaluated. Anyway, IBM did all kinds of weird stuff in hardware to deal with physical-to-virtual mapping, and it ended up not working anyway.
>> That is good enough because the virtual memory hardware can be
>> used to make it appear to be linear. The process does not need
>> port IO to move the window.
>
> Wrong. You _can_ do it that way, but if you do that then you're shooting
> yourself in the foot. Mainly because it's horrible for performance.
It's perfectly usable. I kind of stopped doing display drivers because things got to the point where I thought they were fast enough. Never mind that the marketing guys are still pressuring engineers to make things go faster -- the benchmark numbers are now +infinity as far as I care, meaning that once things are fast enough, you don't notice if they get faster. I've done some god-awful slow drivers that do everything in software a couple of times over (I had my reasons) and they are still totally usable. Memory-mapping tricks plus software drawing are certainly fast enough, and they require only a very small amount of chip-dependent code once the OS or whatever is set up for it. But I don't endorse such a thing, because it's just a hack for bad hardware.
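For reference, the memory-mapping trick being dismissed above looks roughly like this in user space -- a toy sketch only, where bank_select() and fb_fd stand in for chip- and OS-specific details I'm making up, and doing mmap() inside a signal handler is only defensible in a hack like this:

    #include <signal.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/mman.h>

    #define WINDOW_SIZE 0x10000          /* 64K hardware window at 0xA0000 */

    static uint8_t *linear_fb;           /* the fake linear frame buffer   */
    static int      cur_bank = -1;

    extern void bank_select(int bank);   /* chip-specific hook, assumed    */
    extern int  fb_fd;                   /* open("/dev/mem"), assumed      */

    static void fault_handler(int sig, siginfo_t *si, void *ctx)
    {
        (void)sig; (void)ctx;
        size_t off  = (uint8_t *)si->si_addr - linear_fb;
        int    bank = (int)(off / WINDOW_SIZE);

        if (cur_bank >= 0)               /* make the old bank fault again  */
            mprotect(linear_fb + (size_t)cur_bank * WINDOW_SIZE,
                     WINDOW_SIZE, PROT_NONE);

        bank_select(bank);               /* move the hardware window       */
        mmap(linear_fb + (size_t)bank * WINDOW_SIZE, WINDOW_SIZE,
             PROT_READ | PROT_WRITE, MAP_FIXED | MAP_SHARED,
             fb_fd, 0xA0000);
        cur_bank = bank;
    }

    void linear_fb_init(size_t fb_size)
    {
        /* Reserve the region with no access so every touch faults. */
        linear_fb = mmap(NULL, fb_size, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

        struct sigaction sa = {0};
        sa.sa_sigaction = fault_handler;
        sa.sa_flags     = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);
    }

Every drawing run that crosses into an unmapped bank eats a page fault, which is exactly the performance complaint -- but it works, and the chip-dependent part is a few lines.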
Tom.