 
Subject: Re: Kernel interface changes (was Re: cdrecord problems on
On Thu, Feb 04, 1999 at 09:27:55PM +0300, Khimenko Victor wrote:
> In <19990204091224.A31267@anjala.mit.edu> Arvind Sankar (arvinds@mit.edu) wrote:
> > On Thu, Feb 04, 1999 at 09:07:27AM -0700, yodaiken@chelm.cs.nmt.edu wrote:
> >> > Unfortunately, I think it is a problem you have to take up and deal with.
> >> > Recompiling sources for entire server setups in a live production
> >>
> >> So you use the 2.0 version until a 2.2 version stabilizes. The problem
> >> really is that the Linux unreliable development kernel is so good that
> >> people actually want to run production systems on it, and then complain when
> >> it does not stay stable.
>
> > 2.2 is supposed to _be_ stable, not gradually stabilize. That's what 2.1/2.3 are
> > for.
>
> Unfortunately it's not possible. Joe Average will not even try a "development"
> kernel, and some errors simply cannot be found without A LOT OF users.

If it's not possible, why do we have documentation claiming there is any
difference between 2.1 and 2.2? For that matter, why was 2.2.0 not 2.1.133? Besides,
the idea of debugging code by running it until it dies is inherently flawed.
[The old adage: if architects built buildings the way programmers write programs, the
first woodpecker that came along would destroy civilization. Fortunately, that's not
yet true :-)]

>
> >>
> >> > Besides, binaries are still the best way to get up and running as fast as
> >> > possible. Waiting to bring up a replacement server because it's still
> >>
> >> Really? And what about waiting until the binary-only patches get shipped by
> >> the vendor? For those of us with experience on binary-only systems, this
> >> complaint is completely mysterious.
>
> > Tell that to redhat/debian, whoever.
>
> Huh. And why do redhat/debian strive so hard for "reproducible builds" (unlike
> Slackware :-)? That is why it's trivial to recreate a new binary package when
> needed with just `rpm --rebuild` (or whatever the equivalent is in Debian).
> Of course it's not the only goal, but `rpm --rebuild` is there for a purpose!

I was pointing out that the vast majority of your Joe Averages actually use
binary distributions built for them by somebody else. They have neither the
knowledge nor the inclination to build their own. That is how redhat makes
money. Besides, rpm --rebuild sucks: you can't split it up intelligently into
little stages. For example, ncurses-4.2 does not rebuild cleanly after installing
glibc-2.1. You have to be more intelligent about it. Which is the problem. You
can't expect everybody to keep figuring out what went wrong where after they
rebuilt their supposedly stable kernel. If somebody finds that fun, they run
the development kernels.

[ The ncurses case was just a random example. I don't blame anybody for it. I
fully realize that glibc-2.1 is still a development version. ]
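
Just so we're talking about the same thing, here is roughly what stepping through
a rebuild by hand looks like, as opposed to the all-or-nothing --rebuild. The file
names are made up for illustration and the exact options depend on which rpm
version you have installed:

  rpm --rebuild ncurses-4.2.src.rpm      # all-or-nothing: prep, build, install, package
  rpm -bp ncurses.spec                   # %prep only: unpack the sources, apply patches
  rpm -bc --short-circuit ncurses.spec   # continue at %build without redoing %prep
  rpm -bi --short-circuit ncurses.spec   # continue at %install
  rpm -bb ncurses.spec                   # once it builds cleanly, do a full run and package it

The point is that when something like the ncurses/glibc-2.1 breakage shows up,
you want to poke at one stage at a time instead of restarting from scratch.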

>
> Looks like a few overlooked problems in 2.0.x kernels started all this fuss :-((
> Even NT, where binary compatibility is a fetish, does it (after all, NT 4.0 and
> NT 4.0 SP3 are not 100% binary-compatible!), so why do you think Linux is an
> exception?

Look, I don't run NT because it sucks. I don't want Linux descending to that level.

>
> Or they will outsource "stuff like this", or the stuff will be thrown in the
> dustbin and an OSS replacement will be written eventually (even if that does
> not happen this year or next year :-) We have a simple rule here: "no software
> in kernel mode or running as root on servers without available sources", and I
> think this trend will keep expanding in the future. [VERY] Slowly but inevitably...

Duh? The MIT Athena project has been around since the '80s. It's not going to go
away anytime soon. Why should it? It's far easier to dump Linux in favour of NetBSD,
which is also `OSS', as you put it.
Besides, what would you do with all those sources? Do you have any idea how much
time it takes to rebuild a server? Or worse, to figure out which bits need rebuilding
and which don't? And in an environment where the maximum acceptable downtime is zero?

>
> Why do you think so? IMO "no drivers" or "no AFS support" or "no USB support"
> is WAY better than "binary-only drivers", "AFS support via a binary module",
> and "USB with a precompiled kernel blob".
>
>

Duh? You seriously believe that? Nothing is better than something?

-- arvind

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/
