    Date:    29 Mar 1998
    From:    Linus Torvalds
    Subject: Re: GGI and cli/sti in X


    On Sat, 28 Mar 1998, Vagn Scott wrote:
    > Linus Torvalds wrote:
    >
    > > The small root-owned process then stays around, does a "wait()" on the
    > > child (the X server) and when the X server exits it restores the screen
    > > and everything is hunky dory.
    >
    > This, of course, is the challenge.
    > If it can be done then the problem is solved.

    It _can_ be done. The small root process would be _part_ of the XF86
    distribution - it would do all the mode switching for the X server. As
    such it can do _anything_ the current X server does (which obviously
    includes switching back to text mode).
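
    A minimal sketch of what such a trusted parent could look like
    (illustration only, not actual XFree86 code: save_video_state(),
    restore_text_mode() and the server path are made-up placeholders for
    the hardware-specific and distribution-specific parts):

    #include <errno.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    extern void save_video_state(void);   /* hypothetical: remember the card state */
    extern void restore_text_mode(void);  /* hypothetical: put the card back in text mode */

    int main(int argc, char *argv[])
    {
            pid_t pid;
            int status;

            (void)argc;
            save_video_state();

            pid = fork();
            if (pid < 0) {
                    perror("fork");
                    return 1;
            }
            if (pid == 0) {
                    /* child: drop root, then run the X server proper */
                    if (setgid(getgid()) < 0 || setuid(getuid()) < 0)
                            _exit(1);
                    execv("/usr/X11R6/bin/Xserver.real", argv);  /* made-up path */
                    _exit(1);
            }

            /* parent: stays around, waits for the X server, cleans up.
             * Even "kill -9" on the X server ends up here. */
            while (waitpid(pid, &status, 0) < 0 && errno == EINTR)
                    ;       /* interrupted by a signal, retry */

            restore_text_mode();
            return 0;
    }

    Note that the child drops root before exec'ing the X server proper,
    which is the whole point of keeping the privileged part a separate
    process.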

    XFree86 already has support for setting the video card clock speed with an
    external program - this just takes it one step further, to set the whole
    mode up.

    > EXAMPLE
    >
    > shazam GoodCard xserver-GOOD -bpp 16 &

    No. Example:

    startx

    and the X server does all of this. I didn't imply that the small program
    would be _separate_ from the X server in any way: it would be a separate
    process for security reasons, but it would be part of X.

    > Works well for GoodCard and VeryGoodCard2000.
    > Fails miserably on any card that can be set into
    > a state such that to leave that state you must
    > know what state it is in. For such cards the
    > knowledge that it was once in a particular state
    > is not useful.
    >
    > The cards for which shazam is not useful include:
    > S3-yada-yada
    > ATI-blah-blah
    > some others

    I'm going to ignore the rest of this thread, because I get responses like
    this from people who haven't thought the problem through.

    OF COURSE the small program has to keep track of the video mode. That is a
    given. What's so hard about that?

    For example, when the user presses "ctrl-alt-+" to get to another video
    mode, the "real" X server would just send a signal back to its parent
    telling the parent to switch to the next higher resolution. The X server
    proper would never need to worry about the thing.
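
    To make that concrete, a rough sketch of the parent side (the
    SIGUSR1/SIGUSR2 choice, the mode table and program_mode() are
    assumptions for illustration, not anything XFree86 actually defines):

    #include <signal.h>
    #include <string.h>

    struct video_mode { int xres, yres, dotclock_khz; };

    /* the parent's own record of what the card can do and what it is doing now */
    static const struct video_mode modes[] = {
            {  640,  480,  25175 },
            {  800,  600,  40000 },
            { 1024,  768,  65000 },
    };
    static int current;                       /* index of the mode the card is in */

    extern void program_mode(const struct video_mode *m);  /* hypothetical hw poke */

    static void mode_up(int sig)              /* X server sent SIGUSR1 on ctrl-alt-+ */
    {
            (void)sig;
            if (current + 1 < (int)(sizeof(modes) / sizeof(modes[0])))
                    program_mode(&modes[++current]);
    }

    static void mode_down(int sig)            /* X server sent SIGUSR2 on ctrl-alt-- */
    {
            (void)sig;
            if (current > 0)
                    program_mode(&modes[--current]);
    }

    void install_mode_handlers(void)
    {
            struct sigaction sa;

            memset(&sa, 0, sizeof(sa));
            sigemptyset(&sa.sa_mask);
            sa.sa_handler = mode_up;
            sigaction(SIGUSR1, &sa, NULL);
            sa.sa_handler = mode_down;
            sigaction(SIGUSR2, &sa, NULL);
    }

    The X server proper then just does kill(getppid(), SIGUSR1) from its
    keyboard handling and never touches the mode registers itself (a real
    helper would probably only set a flag in the handler and do the
    register work outside it, but that is a detail).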

    What's so hard to understand? The fact is that NONE of this actually
    requires any kernel help.

    The things that might require kernel help are things like DMA and
    interrupt access, and there it makes perfect sense. I want to re-iterate
    that I'm not against having the kernel help the X server as required.

    What I AM against is these stupid people that think that the "kill -9 X"
    argument is worth anything. It is not - because it is easily fixed by
    having a separate part that cleans up after the X server. Go back and read
    my mail.

    Anybody who thinks that it is easier to do things like mode switching in
    kernel mode is very seriously mistaken. Kernel programming is a LOT harder
    than programming a trusted daemon, and it is a lot easier to get the
    kernel part wrong. And when the kernel part is wrong, the end result is
    something much worse than just a graphical screen that you can't get to do
    anything.

    Some people claim that the kernel part of GGI is very small compared to X,
    and thus easy to prove right. So what? If it is so easy to prove right it
    _still_ should be done in user mode if at all possible. And I have just
    told you exactly _how_ it is possible.

    So go, and sin no more.

    Linus

