Date:    Sun, 2 Jun 1996 15:42:17 -0400 (EDT)
From:    Ingo Molnar <>
Subject: Re: SVGA kernel chipset drivers.
the following discussion is only true if GX >register< access times are an issue. And judging from the X server using iopl(), it's an issue ...
On Sun, 2 Jun 1996, Linus Torvalds wrote:
[ "priviledge dropping" SVGA server implementation in user space ]
> Essentially, this already sets up the protection domains. The server is
> protected, yet the fork() inheritance is able to give the graphics
> program all the resources it needs, and nothing more.
presuming that a user program is allowed to have all privileges. What we are talking about is an interface to the graphics hardware. The most direct method is the way X does it: it gets the whole resource, at "bare metal" level. At the other extreme, the most indirect way is to put everything that does even a single outb() or similar operation into the kernel.
The problem is not the framebuffer. The problem is those zillion VGA/SVGA/XGA/CGA/EGA/S3/Mach32/64/etc. registers that all need to be put straight. In user space we either use iopl() to get >all< registers on the machine, or we use ioperm() to get some registers (but then we take the performance hit on a >per register< basis).
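to make the ioperm() route concrete, here is a minimal sketch (mine, not from the original mail) that grants access to just the two standard VGA DAC ports, programs one palette entry, and drops the access again. The port numbers are the well-known VGA DAC registers; the rest is illustrative. Compile with gcc -O2 (the inline outb() in <sys/io.h> wants optimization) and run it as root:

    /* sketch: touch only the VGA DAC write index/data ports */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/io.h>

    #define DAC_WINDEX 0x3c8
    #define DAC_DATA   0x3c9

    int main(void)
    {
        /* two ports, turned on -- needs root, and only works for
           ports below 0x400; iopl(3) would open up *every* port */
        if (ioperm(DAC_WINDEX, 2, 1) < 0) {
            perror("ioperm");
            return 1;
        }
        /* the I/O permission bitmap survives fork() and setuid(),
           so a server could drop root right here and keep the ports */

        outb(0, DAC_WINDEX);          /* select palette entry 0  */
        outb(63, DAC_DATA);           /* red (6-bit DAC values)  */
        outb(0, DAC_DATA);            /* green                   */
        outb(0, DAC_DATA);            /* blue                    */

        ioperm(DAC_WINDEX, 2, 0);     /* drop access again */
        return 0;
    }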
With a "partly in the kernel" approach, the performance hit is per operation. I don't know all GX cards well enough to decide which hit is bigger. But what if we don't even want to give register access to a user program at all? I'm not trying to be destructive or anything, but some cards are capable of damaging older monitors with too-high frequencies. Some cards burn out if they get programmed to a too-high clock rate. And I think some cards can't be recovered to text mode if access to the registers is not traced in a secure way. None of these problems can be solved with the ioperm() call.
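for contrast, a sketch of what the "partly in the kernel" interface could look like from user space. The device node, the ioctl number and the struct are all invented for illustration; the point is that the kernel gets trapped once per >operation<, and the driver can refuse a pixel clock that would fry the monitor:

    /* hypothetical interface -- nothing here exists in any kernel */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>

    struct gfx_mode {             /* invented */
        int  xres, yres;
        long pixel_clock_khz;     /* kernel checks against card/monitor limits */
    };

    #define GFX_SET_MODE 0x4701   /* invented ioctl number */

    int main(void)
    {
        struct gfx_mode m = { 640, 480, 25175 };  /* standard 640x480 dot clock */
        int fd = open("/dev/graphics", O_RDWR);   /* invented device node */

        if (fd < 0) { perror("open"); return 1; }

        /* one trap into the kernel per operation, not per register --
           the driver programs the clock registers itself after validating */
        if (ioctl(fd, GFX_SET_MODE, &m) < 0)
            perror("GFX_SET_MODE rejected");

        close(fd);
        return 0;
    }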
the X way is a very good solution once all clients communicate through a high-level graphics protocol (read: the X protocol). No problem taking the IPC hit there; unix-domain sockets are fast as hell anyway. But the usual SVGA programs do much lower-level operations (changing the palette through registers that, btw., might sit right next to the clock generator registers), too low-level to make IPC viable.
In this sense we can talk about X as the "X protocol kernel".
I know the whole concept of PC graphics cards is broken beyond recognition, but at least we have to understand the problems before talking about solutions :) [I'm afraid it is me who should start doing so]
-- mingo
ps. I like the user space solution much better, since it's much more robust. But the problem is that for >any< >safe< interface we want to implement, we currently have two choices:
 - using some kind of IPC: TLB flush and >5 usecs switch time
 - using a kernel trap: 2.5 usecs enter/exit time
(see below for a rough way to measure both on a given box)
there are other possibilities:
 - maybe using call gates to ring 2 or ring 1 ... dunno how, and I guess it breaks on other platforms
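as a rough, assumption-laden sketch (mine, not from the original mail): a small program that measures a near-null system call against a one-byte ping-pong over a unix-domain socketpair, to put numbers like the above on a given machine. getpid() stands in for a null syscall; note that some libc versions cache its result, in which case substitute another cheap call. The loop count is arbitrary:

    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/time.h>

    #define LOOPS 100000

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        int sv[2], i;
        char c = 'x';
        double t;

        /* cost of entering/leaving the kernel */
        t = now();
        for (i = 0; i < LOOPS; i++)
            getpid();
        printf("syscall:   %.2f usec\n", (now() - t) / LOOPS * 1e6);

        /* cost of a full IPC round trip (two context switches) */
        socketpair(AF_UNIX, SOCK_STREAM, 0, sv);
        if (fork() == 0) {                     /* child: echo server */
            close(sv[0]);
            while (read(sv[1], &c, 1) == 1)
                write(sv[1], &c, 1);
            _exit(0);
        }
        close(sv[1]);
        t = now();
        for (i = 0; i < LOOPS; i++) {
            write(sv[0], &c, 1);
            read(sv[0], &c, 1);
        }
        printf("roundtrip: %.2f usec\n", (now() - t) / LOOPS * 1e6);
        close(sv[0]);                          /* child sees EOF, exits */
        return 0;
    }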