Date: Sat, 4 Jan 1997 13:28:49 +0200 (SAT)
From: Mike Kilburn <>
Subject: Re: too much untested code in new kernels (fwd)
On Fri, 3 Jan 1997, Jim Nance wrote:
> gets extensively tested is for Linus to release 2.0.31. Thus it would be
> foolish to ftp to ftp.cs.helsinki.fi, get 2.0.31, and assume that it
> was a well tested production quality kernel. The RedHat/Debian/Etc.
> people will move to 2.0.31 if and when they feel it is stable enough.
At present, users (knowing or otherwise) are used as test sites for "stable" kernels. In order to really test a kernel, one would want to see it run a busy news system, a 128-port terminal server, a massive multiuser database, BGP4 on a big network, and so on, under a variety of hardware and software conditions... you get the idea.

Ideally there would be a "Linux QC Project" where people with access to the required hardware, software, and users could do these tests. But there is no such project at present; Linux does not have a QC department, and I doubt distribution makers will spend the money required to do this. The people who can run the kernel under the right conditions to really test it are, for now, the unwitting users who think it is already stable. In other words, the only way the kernel gets fully tested is when people *think* it is stable. Linux has done incredibly well under the current system, but any real improvement will require a commitment of time, money, hardware, etc. by people.