Date: 26 Jun 1996
From: Alex Krimkevich
Subject: Re: real kernel bloat
Alan Cox writes:
> > almost anything. It is true that DEC's kernel is several megs in size,
> > but don't forget it is capable of much more than Linux is, and,
> > arguably, ever will be. The missing capabilities are of no concern
> > to most people; however, the truth remains: Digital UNIX is a better
> > multitasking, multiuser OS than Linux. I am no fan of DEC or Sun,
>
> On what measurements, which facilities, what hardware size?

Well, my personal experience has been that Linux's performance
deteriorates pretty rapidly as the load increases (be it one user
running more jobs, or more users being logged in). You probably
know that Digital Unix 4.0 is capable of supporting 4000 users
(so they claim). It has failover features (clustering), it
scales to more processors than Linux does, it has a logical
volume manager and a journaling file system, and the list goes on
and on. And it knew how to do those things five years ago.

As of this moment Linux can't do any of these. Sure, it will,
eventually. But that is catch-up mode. By the time Linux
catches up with DEC on these or other features, DEC will have
something else up its sleeve. I hope I am wrong, but so far my
impression has been that very little innovation comes from the
Linux camp: all we do is try to outperform others at things they
had been able to do years before Linux's (or even Linus's :-) )
conception. And they honestly don't care. Who cares that OSF's
kernel is 8 MB if an AlphaStation comes with 64 MB minimum, and
memory prices keep going down? If you were Digital, wouldn't you
rather concentrate on the features your customers demand than
optimize every line of code in the kernel?

> > but let's be honest - there is no way that a group of people, most
> > of whom hold daytime jobs, can compete with the multibillion-dollar
> > corporations, which employ some of the best minds on this planet.
>
> I beg to differ. And every benchmark we have floating around says just who
> is winning.

See above. In addition, let me add that a user's perspective on
what an OS is differs quite a bit from yours, as Linux's major
contributor. As a user, I don't run kernel TCP/IP benchmarks; I
run ftp instead. And what I see is that ftping from Solaris 2.4
on a Sparc 2 to an AlphaStation on a different subnet delivers
660 KB/s on average. The Linux box, which hardware-wise should
beat the crap out of an ancient Sparc, delivers 250 KB/s. I
can't tell you what the deal is. It might be that Linux is not
very good when there are collisions on the ether. Or maybe
Linux's ftp server is not too good. I don't know. But the
latter brings up another point: in many cases, optimizing an
application can do a whole lot more than optimizing the
operating system. In the case of the commercial Unices, that is
precisely what is going on. Unlike Linux distributions, where
only the kernel is heavily optimized and the other parts are
rather generic, the Unix vendors sell a complete, optimized
solution, which may well be the reason for the better "real
life" results I mentioned above. They may lose on some rather
artificial kernel benchmarks, but their products may do a better
job OVERALL.
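
To make the distinction concrete, here is a minimal sketch in C of
the "user's eye" measurement I mean: time the bytes as they
actually arrive on an already-connected data socket, the way ftp
reports its rate, rather than running a kernel micro-benchmark.
The function name and the 8 KB buffer size are mine, purely
illustrative.

#include <stddef.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/time.h>

/* Drain an already-connected socket (or any fd) and report the
 * application-level throughput in KB/s - the number ftp prints. */
double measure_throughput_kbs(int fd)
{
    char buf[8192];
    long total = 0;
    ssize_t n;
    struct timeval t0, t1;
    double secs;

    gettimeofday(&t0, NULL);
    while ((n = read(fd, buf, sizeof buf)) > 0)
        total += n;
    gettimeofday(&t1, NULL);

    secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    return secs > 0 ? (total / 1024.0) / secs : 0.0;
}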

> You might want to look at the kind of people Linux hackers are
> working for in their non-spare time, and the sort of people who are
> hiring them ...

That's beside the point. Linux developers may be as bright as, or
brighter than, those who work for DEC or Sun, or whoever.
However, they are still very much disadvantaged by the simple
num_of_people * time_avail * money arithmetic. The only
advantage Linux enjoys - and I must admit it is a huge one - is
the number of testers available, and the speed at which bug
fixes can be implemented and re-tested. I suspect this is the
only reason Linux has been able to do so much catching up in
such a short period of time.

> The place stuff like OSF/1 should still win is going to be 12
> processor, .5 GB Alphas. The really big stuff, and that's primarily
> because we don't have any of those handy for a Linux port and to do
> all the tuning.

That's precisely what they are after. It appears to me that
neither Sun nor DEC is after the mass market. They are keeping
the workstation users to themselves by putting out better
hardware every once in a while, and trying to expand in their
"really big" stuff.

>
> > Now, assuming that you can buy Solaris x86 + Sun's Workshop (the best
> > C++ implementation, a debugger with support for multi-threading,
> > multiprocessor tools) for around, say, $300, and you had to pay a
> > similar amount for a Linux distribution (just stretch your
> > imagination), what would you rather have?
>
> Given the performance, the hideous TCP problems with Solaris, and the
> fact that Linux is faster, has source, and has more useful tools, I think
> most of us here know. My SMP box is happily doing bbthreaded stuff,
> PvmPovray etc. (PVMPovray is funky btw..)

I think both of us show some bias. By the way, about useful
tools: can you recommend a nice C++ compiler for Linux? Don't
tell me that g++ is nice - it will be, once it supports the ANSI
C++ working papers. How about P6-specific optimizations? When
are they coming? Wouldn't they do more for kernel performance
than all the optimizing in the next 10 years?

I tried to illustrate two points:
- viewed across the board, free software can't keep up with
commercial software;
- kernels are only as useful as the applications running on top
of them. If I need to write a C++ application that has to be
portable across several platforms with minimum effort, I want a
standards-compliant compiler. It will save me a lot of time -
much more time, in fact, than the small savings coming from the
better performance of some kernel on some tasks.

I guess I should have made it clearer that I treat the term "OS"
as an environment for doing something useful, not just a kernel
that can be benchmarked favorably. Although Linux has done
wonders in this respect, I am afraid its usability is never
going to be as high as that of the commercial Unices, simply
because there are many more people who are ready to write
software for money than people who write it as a means of
self-expression. And this suggests the way to really improve
Linux's chances of BIG success - make it EASY for commercial
entities to write for Linux. That means supporting Spec 1170,
not just POSIX. The reason is that people who sell their
software into the commercial Unix marketplace should be able to
simply recompile their applications for Linux, without the
extensive rewrite which is a strong possibility right now. It
does not matter whether the SysV implementation of something is
silly or not. It may well be. But it's what people are used to,
and if correcting this silliness means headaches for commercial
software developers, it should be avoided. Or else provide some
Spec 1170 compatibility library. And don't worry about
performance that much - it only has to be good enough. A 10%
improvement is not worth your effort; those 10% can be recovered
elsewhere much more easily, as added functionality for example.

I read your answer to Alexey regarding STREAMS. I am sure you
are right that it's slower than sockets, but that does not
matter. If someone recompiles, or starts developing, an
application for Linux because of this feature, it will benefit
Linux much more than better benchmarks with no applications to
use them.
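
To show what I mean by a compatibility library: here is a rough
sketch, in C, of how the data path of the STREAMS putmsg()/getmsg()
calls could be emulated on top of ordinary BSD sockets, so that
SysV code compiles and runs unchanged. The shim_ names and the
simplifications (no control messages, no priority bands, the whole
message assumed to go out in one send()) are mine, purely
illustrative - not a real library.

#include <errno.h>
#include <stddef.h>
#include <sys/types.h>
#include <sys/socket.h>

struct strbuf {            /* as declared in <stropts.h> on SysV */
    int   maxlen;          /* maximum buffer length (getmsg only) */
    int   len;             /* number of bytes in buffer */
    char *buf;             /* data buffer */
};

/* Emulate putmsg() for the common data-only case. */
int shim_putmsg(int fd, const struct strbuf *ctl,
                const struct strbuf *dat, int flags)
{
    (void)flags;
    if (ctl != NULL && ctl->len > 0) {
        errno = ENOSYS;    /* control part not emulated here */
        return -1;
    }
    if (dat == NULL || dat->len <= 0)
        return 0;          /* nothing to send */
    return send(fd, dat->buf, (size_t)dat->len, 0) < 0 ? -1 : 0;
}

/* Emulate getmsg() for the data-only case. */
int shim_getmsg(int fd, struct strbuf *ctl, struct strbuf *dat,
                int *flags)
{
    ssize_t n;

    (void)flags;
    if (ctl != NULL)
        ctl->len = -1;     /* no control part available */
    if (dat == NULL)
        return 0;
    n = recv(fd, dat->buf, (size_t)dat->maxlen, 0);
    if (n < 0)
        return -1;
    dat->len = (int)n;
    return 0;              /* 0 == whole message consumed */
}

Slower than calling the sockets directly, no doubt, but the point
is the recompile, not the cycles.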

Let me sum things up: Linux has not conquered the world yet,
but keep up the great work, guys.

Alex Krimkevich.

P.S. I just wanted to re-instill a sense of reality into this
mailing list :-)

