    Subject: [Re: Linux headed for disaster?] -- I don't think so.
    On Sun, 5 Dec 1999, Kendall Bennett wrote:

    > Every single problem that has been mentioned as reasons not to
    > implement Binary Portable modules for the Linux kernel is solvable. In
    > fact there are *lots* of incredibly sound reasons for why the Linux
    > kernel should be re-worked to support binary loadable modules that
    > are portable between kernel versions, *even* for Open Source drivers,
    > some of which are:
    >
    > 1. A later version of a kernel may well have introduced new bugs
    > into a previously stable driver. Solving this problem currently
    > requires the user to revert back to an older kernel revision. Doing
    > so may not be desirable because the new kernel version may have
    > updates and fixes that are desired. With binary portable modules, the
    > module from a previous kernel that did work could be used in its place
    > without problems (ie: it is expected to work unless there is an
    > interface change).

    Take a look at this point from the opposite side. What if I am the only
    user of a piece of hardware who is currently reporting a bug? Well, in
    most cases it isn't interesting for a vendor to fix a rare problem until
    it has been reported by a number of users. Consider also that in many
    cases, after the development is "complete", the developers have moved on
    (be it to other products or other jobs) and the company no longer has the
    option of performing tweaks on a driver without incurring major costs.

    How do you combat these problems with binary only drivers? You can't.
    Why should we encourage binary only drivers? We shouldn't.

    > 2. Binary portability requires more solid and clearly defined
    > interfaces between the kernel internals and the modules themselves.
    > By requiring that there be a clear separation or 'firewall' between
    > drivers and the kernel internals, you can more easily avoid the
    > classic problems of coupling where changes in some other part of the
    > code affect other code that should not be affected. Specifically it
    > makes it impossible for a driver to implement a 'quick hack' solution
    > by accessing the internals of some other driver or the kernel
    > directly. However the *only* way to enforce this is to design device
    > type specific binary API's, and *require* that the only way a device
    > driver can talk to the kernel is via these API's.
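
    To make that proposal concrete, here is a rough sketch of what such a
    device-type-specific binary API might look like. The names are invented
    for illustration only; nothing like this exists in the stock kernel.

        /* Hypothetical "binary portable" network driver API: the kernel
         * calls the driver only through this table, and the driver calls
         * back only through pointers the kernel hands it at probe time. */
        struct netdrv_api_v1 {
                unsigned int api_version;       /* bump on incompatible change */
                int  (*probe)(void *dev_handle);
                int  (*open)(void *dev_handle);
                void (*stop)(void *dev_handle);
                int  (*transmit)(void *dev_handle, const void *buf,
                                 unsigned int len);
        };

        /* The only symbol such a binary module would need to export. */
        extern const struct netdrv_api_v1 *netdrv_get_api(void);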

    Sure, better defining APIs is a Good Thing, but how can we possibly commit to
    etching an API in stone when we don't know what additional needs will
    exist two years from now? The Linux kernel is an evolving body of
    software that responds to the needs and desires of its developers.
    Usually when APIs are changed, it's for the better and it results in
    cleaner code and a better defined interface -- are you suggesting that we
    should cease improving the internals of the kernel? Remember that much
    kernel code is inlined into drivers for performance reasons, and that
    performance is one of Linux's major accomplishments.
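
    To see why the inlining matters here, consider a simplified sketch of
    the kind of helper a driver picks up from the kernel headers. This is
    not the real <linux/skbuff.h> code, just an illustration: because the
    helper is a static inline, its body, including the structure layout it
    assumes, is compiled straight into the driver object, tying the driver
    to the kernel version it was built against.

        /* Simplified illustration, not an actual kernel header. */
        struct buf_ish {
                unsigned char *data;
                unsigned char *tail;
                unsigned int   len;
        };

        static inline unsigned char *buf_put_ish(struct buf_ish *b,
                                                 unsigned int len)
        {
                unsigned char *old_tail = b->tail;

                b->tail += len;         /* layout assumptions baked into caller */
                b->len  += len;
                return old_tail;
        }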

    > 3. Binary portability means much less regression testing is required
    > for new kernel versions. If the driver itself does not get re-
    > compiled, it does not need to be thoroughly re-tested. In the case of
    > the Linux monolithic kernel approach, every driver is compiled into
    > each new kernel version. How do you *really* know that a driver is
    > functioning properly when a final release of 2.2.100 is made, unless
    > *every* single device that is supported is properly tested with that
    > particular version?

    This is a false statement. Changes between versions result in numerous
    side effects that can trigger latent bugs in drivers. Timing changes; the
    memory layout changes so that a previously harmless stray pointer can now
    result in a crash. Also, binary compatibility is rarely achieved -- take
    a look at M$'s own products: can you use drivers from Windows 95 with
    Windows NT? Nope. Drivers always require updating. Requiring source
    simply ensures that it is *possible* for third parties to perform the
    update when the creator is no longer available or interested.
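
    A contrived example of the kind of latent bug meant above: the stray
    write lands in structure padding under one layout and goes unnoticed,
    then an unrelated change to the structure puts a live field behind it
    and the same, unrecompiled driver starts corrupting data.

        /* Contrived illustration of a latent driver bug. */
        struct card_state {
                char name[14];  /* followed by 2 bytes of padding...        */
                int  irq;       /* ...until someone reshuffles the struct   */
        };

        static void set_name(struct card_state *c, const char *s)
        {
                int i;

                /* Off by two: copies up to 16 bytes into a 14-byte field.
                 * With the old layout the overflow only hits padding; after
                 * a layout change it silently clobbers 'irq'. */
                for (i = 0; i < 16 && s[i]; i++)
                        c->name[i] = s[i];
        }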

    ...
    > Lots of stuff available for Linux outside of the Linux kernel is not
    > Open Source. A lot of stuff is. The stuff that is, is Open Source
    > because it makes sense for it to be Open Source, not because the
    > developers were forced to make it Open Source. Open Source software
    > will be successful because of the power that opening the source code
    > provides. The power of 'with enough eyes, all bugs are shallow', as
    > Linus once said. Has Linus forgotten the reasons why Linux is where
    > it is today? Instead he appears content to wield the power of
    > dictator over the Linux kernel sources to force vendors to do things
    > his way.

    If someone wishes to keep details of their product proprietary, then they
    should provide a well defined interface to said product. It *is* possible
    to ship a driver that consists of a source portion and a binary portion,
    but in real life, it is rarely useful. Pushing the hardware vendor's
    inability to define an interface into our overhead just isn't something I
    feel like accepting, because there *are* hardware vendors who create good
    products and document the interfaces to them. So long as I have choice,
    I'll use a driver I can hack over one that's closed.
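
    The source-plus-binary split mentioned above usually looks roughly like
    the sketch below: a small open-source glue file, rebuilt for each
    kernel, is the only piece that includes kernel headers and calls into
    the vendor's opaque object through one narrow interface. The
    vendor_core_* names are invented for illustration.

        /* glue.c: hypothetical open-source shim, recompiled per kernel. */
        #include <linux/module.h>
        #include <linux/slab.h>                 /* kmalloc()/kfree() */

        /* Narrow, vendor-defined boundary to the closed binary core. */
        extern int  vendor_core_init(void *(*alloc)(unsigned long),
                                     void  (*free)(void *));
        extern void vendor_core_exit(void);

        /* The binary core never sees kernel headers; it gets plain
         * function pointers from the glue and nothing else. */
        static void *glue_alloc(unsigned long size)
        {
                return kmalloc(size, GFP_KERNEL);
        }

        static void glue_free(void *p)
        {
                kfree(p);
        }

        int init_module(void)
        {
                return vendor_core_init(glue_alloc, glue_free);
        }

        void cleanup_module(void)
        {
                vendor_core_exit();
        }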

    -ben


