    Date: 4 Mar 1998
    From: Clayton Weaver <cgweav@eskimo.com>
    Subject: RE: GGI modularity

    I think the sense of the discussion re: GGI is that the starting
    point needs to be a clear model of device abstraction, as opposed
    to just diving into the code.

    Linus' point was most likely that the input most people use already
    works; it's the output device interface that most needs an
    abstraction layer, even if a general input abstraction interface
    would be handy for people attaching exotic or multiple input devices
    to a single graphics state.

    The starting point is the intersection of all the parts: an abstract 2-d
    canvas with a unique id in which the kernel can represent state changes
    for any given graphics context. This abstraction needs to be nestable.
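
    Something like the following rough sketch, where every name is
    hypothetical and only meant to make the idea concrete:

        /* Rough sketch of the abstract canvas: a unique id, a small
         * state block, and a parent pointer for nesting.  All names
         * here are illustrative, not an existing interface. */
        struct canvas_state {
                int width, height;      /* extent of the 2-d space   */
                int cursor_x, cursor_y; /* current position          */
                unsigned long flags;    /* e.g. row/column semantics */
        };

        struct canvas {
                unsigned int id;        /* unique id for this canvas */
                struct canvas_state state;
                struct canvas *parent;  /* nesting: NULL at the top  */
        };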

    The second requirement would be an abstract output interface where the
    kernel can register callbacks to the output operations for specific
    output devices (think vfs), thus attaching some user-chosen set of
    displays or other lower-level renderers to a specific abstract canvas.
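
    To make the vfs analogy concrete, the output side could look roughly
    like this; the structure and the registration call are assumptions
    for illustration, not existing code:

        /* vfs-style table of output callbacks: a driver fills one in
         * and registers it against a canvas, much as a filesystem
         * registers its operations.  Names are illustrative only. */
        struct canvas;                  /* abstract canvas, as above */

        struct canvas_output_ops {
                int  (*attach)(struct canvas *c);
                void (*detach)(struct canvas *c);
                /* render a damaged rectangle of the canvas on this
                 * particular output device */
                void (*update)(struct canvas *c, int x, int y,
                               int w, int h);
        };

        /* hypothetical registration hook, in the spirit of
         * register_filesystem() */
        int canvas_register_output(struct canvas *c,
                                   const struct canvas_output_ops *ops);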

    The third requirement would be an abstract input interface which the
    kernel can attach to canvas instances as a source of inputs that generate
    state changes in a particular abstract canvas.
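
    Again only a sketch, with made-up names:

        /* Abstract input events, delivered by whatever input source
         * has been attached to a canvas.  Illustrative only. */
        struct canvas;

        enum canvas_event_type { CEV_KEY, CEV_POINTER, CEV_BUTTON };

        struct canvas_event {
                enum canvas_event_type type;
                int code;               /* key code or button number */
                int x, y;               /* pointer position, if any  */
        };

        struct canvas_input_source {
                const char *name;
                /* hand one abstract event to the canvas layer, which
                 * turns it into a state change on the attached canvas */
                void (*deliver)(struct canvas *c,
                                const struct canvas_event *ev);
        };

        /* hypothetical attach call */
        int canvas_attach_input(struct canvas *c,
                                struct canvas_input_source *src);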

    If your input is a traditional keyboard, that's a set of callbacks from
    the abstract input interface. GGI doesn't have to muck with the current
    tty code to implement this. If your canvas is a 2-d view onto a 3-d space,
    the 3-d model exists in user-space, but the kernel is still processing
    inputs to and state changes in a 2-d space, because that's what it will
    output to a 2-d-in-hardware graphics device.

    If the output is to a terminal or virtual console, mapping state changes
    in the 2-d space to a row-column model is a set of callbacks from the
    virtual output device layer to a filter in between the abstract canvas and
    an output device driver. Mapping mouse clicks and other events onto a
    row-column model, so that the console view of an abstract canvas can
    consume mouse input, is just an input filter. It's upstream of any
    input hardware protocol. It could be an attribute of canvas state,
    i.e. the input filter is embedded in what the kernel thinks of as the
    state of a particular abstract canvas.
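
    The row/column filters could be as simple as this, assuming a fixed
    character cell size (again, purely illustrative):

        /* Output direction: translate a damaged pixel rectangle into
         * the range of character cells a console driver would redraw. */
        struct cell { int row, col; };

        static void pixels_to_cells(int x, int y, int w, int h,
                                    int cell_w, int cell_h,
                                    struct cell *first, struct cell *last)
        {
                first->col = x / cell_w;
                first->row = y / cell_h;
                last->col  = (x + w - 1) / cell_w;
                last->row  = (y + h - 1) / cell_h;
        }

        /* Input direction: map a mouse click in canvas coordinates
         * onto the row/column model, upstream of any mouse protocol. */
        static struct cell click_to_cell(int x, int y,
                                         int cell_w, int cell_h)
        {
                struct cell c = { y / cell_h, x / cell_w };
                return c;
        }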

    Some canvases have rows and columns, some don't.

    So the scalable model doesn't know what a tty is. It doesn't know what a
    framebuffer is. It only knows what a 2-d canvas is, has a state machine
    for modelling state changes in that canvas, and abstract input and output
    interfaces. Which channels the inputs and outputs travel on is just an
    attribute of that state machine, but those channels have to be
    abstracted before the kernel updates the state machine and unabstracted
    on the way out. Vfs is the best model for this sort of scalable kernel interface
    that I've seen, and exactly the sort of design needed to allow different
    parts of the code to mature independently of each other as breakthroughs
    are made in solving problems in the code.
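
    The dispatch path implied by that would look roughly like the
    following; the two helpers are placeholders for the abstracted-in
    and un-abstracted-out steps:

        struct canvas;
        struct canvas_event;

        /* update the canvas state machine from one abstract event and
         * report the damaged rectangle */
        void canvas_apply_event(struct canvas *c,
                                const struct canvas_event *ev,
                                int *x, int *y, int *w, int *h);

        /* push the damage out through whatever output callbacks are
         * registered on this canvas */
        void canvas_output_update(struct canvas *c,
                                  int x, int y, int w, int h);

        void canvas_handle_event(struct canvas *c,
                                 const struct canvas_event *ev)
        {
                int x, y, w, h;

                canvas_apply_event(c, ev, &x, &y, &w, &h);
                canvas_output_update(c, x, y, w, h);
        }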

    Everyone knows it's needed, but coding should come after an interface
    design that doesn't break part b when part c changes.

    Regards, Clayton Weaver cgweav@eskimo.com (Seattle)

