From: Peter
Subject: Re: Linux-asm (was A patch for linux 2.1.127)
Date: 15 Nov 1998

You have both argued very reasonably and correctly; I just want to add my
"weight" to a point that is perhaps missed or underemphasized here ...

"A month of sundays ago Richard B. Johnson wrote:"
>
> On Sun, 15 Nov 1998, Rogier Wolff wrote:
>
> > In specific cases, you can achieve huge speedups of C code in
> > assembly. What you do have to take into account however is that
> > Assembly is much harder to maintain (*), leading to less efficient
> > algorithms in the long run. That's a drawback that shouldn't be
> > overlooked.

I second this view of Rogier's. The counterargument is one put forward
by David Miller in a slightly different thread, actually: that if you
can't hack it (assembler or C) at this level, you should get out of the
kitchen and leave it to the people who can, and they'll do it well. That
is, excellence of performance _now_ is a win in terms of prestige, and
we ought not to dumb down the kernel source for the masses.

(Dave's comments were, more precisely, that he has backup coders able to
take over at the appropriate level of sophistication in case someone
knocks him off, but I'm abstracting from that and from other comments of
his that he favours performance over maintainability.)

I only wish to add that putting in assembler reduces the number of eyes
capable of reviewing the code by a factor of 100.

That attacks Linus's argument for open source, that "to enough pairs of
eyes, every bug is visible" (or whatever he actually said - I'm not
going to look it up!), and with it one of the pillars of free source:
that bugs are found and repaired more quickly.

That's enough in itself for me to require that assembler somewhere should
produce a speed increase of at least 50% _overall_ (over _all_ the
kernel) before it gets allowed in. 30% faster cuts no ice with me;
machines run 30% faster every two months, practically.
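
To see why I insist on _overall_, a little Amdahl's-law arithmetic (my
own illustration, nothing from the thread): if the routine you recode
takes a fraction f of the total time and you make it s times faster, the
whole kernel only speeds up by 1/((1-f) + f/s). In C:

#include <stdio.h>

/* Amdahl's law: overall speedup when a routine taking fraction f of
 * total running time is made s times faster locally. */
static double overall_speedup(double f, double s)
{
        return 1.0 / ((1.0 - f) + f / s);
}

int main(void)
{
        /* even a dramatic 35x win on a routine using 5% of the time ... */
        printf("%.3fx overall\n", overall_speedup(0.05, 35.0));
        /* prints 1.051x - about 5% overall, nowhere near 50% */
        return 0;
}

Even a spectacular local win barely moves the overall figure unless the
routine dominates the profile.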

> > should implement everything in C before you start coding in assembly.
> > Just to make sure that the algorithms are correct before you jump into
> > the deep assembly "pool".

Another very important point ... a better algorithm will have more
effect than better coding, in my experience. (A silly example: I coded
the RSA stuff used by the Internet banks in this country and in a
neighbouring country. My best efforts at maintaining cache coherency and
so on got me maybe 30% faster, but I went to 900% faster when I decided
to start guessing the results of long divisions and correcting them
afterwards, instead of actually worrying about the implementation
method. No, I've no idea what the current comparison is with GNU; their
approach was essentially to write different assembler for every
architecture, and that will outdate itself in no time, so it never got
considered as a candidate by the banks or their software auditors - IBM,
I think.)
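
In case "guessing and correcting" sounds mysterious, here is a minimal
sketch in C of the same trick as it appears in multiprecision division
(Knuth's Algorithm D) - my illustration, not the bank code. Each
quotient digit is guessed from the leading words alone; the guess can
only be slightly too big, so a short loop repairs it:

#include <stdint.h>
#include <stdio.h>

/* Guess-and-correct division: divide a 64-bit dividend by a 32-bit
 * divisor using only 32-bit multiplies and divides, by guessing each
 * 16-bit quotient digit from the leading words and then repairing the
 * guess. Each guess is at most slightly too high, so the correction
 * loop runs at most a couple of times.
 * Preconditions: v != 0 and (u >> 32) < v, so the quotient fits in
 * 32 bits. Uses the gcc builtin __builtin_clz. */
static uint32_t guess_divide(uint64_t u, uint32_t v)
{
        const uint32_t b = 1u << 16;   /* we work in base 2^16 */
        uint32_t un32, un1, un0, vn1, vn0, q1, q0, un21, rhat;
        int s;

        s = __builtin_clz(v);          /* normalize: top bit of v set */
        v <<= s;
        u <<= s;                       /* cannot overflow: u >> 32 < v */

        vn1 = v >> 16;                 /* halves of the divisor */
        vn0 = v & 0xFFFF;
        un32 = (uint32_t)(u >> 32);    /* leading words of the dividend */
        un1 = ((uint32_t)u) >> 16;
        un0 = (uint32_t)u & 0xFFFF;

        q1 = un32 / vn1;               /* GUESS the high quotient digit */
        rhat = un32 % vn1;
        while (q1 >= b || q1 * vn0 > (rhat << 16) + un1) {
                q1--;                  /* CORRECT: guess was too big */
                rhat += vn1;
                if (rhat >= b)
                        break;
        }
        un21 = (un32 << 16) + un1 - q1 * v;   /* remainder so far */

        q0 = un21 / vn1;               /* same game for the low digit */
        rhat = un21 % vn1;
        while (q0 >= b || q0 * vn0 > (rhat << 16) + un0) {
                q0--;
                rhat += vn1;
                if (rhat >= b)
                        break;
        }
        return (q1 << 16) + q0;
}

int main(void)
{
        uint64_t u = 123456789012345ULL;
        uint32_t v = 87654321;
        printf("guessed: %u, exact: %llu\n", guess_divide(u, v),
               (unsigned long long)(u / v));
        return 0;
}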

> > I once worked on a hard-real-time application. My predecessors had
> > started writing everything in assembly. I ended up re-coding
> > everything in C because their implementation was buggy. All those
> > routines became twice as slow in C as their Assembly counterparts.

About what I'd guess.

> > There was one exception: One routine was 35 times slower in C than in
> > assembly. Moreover, the optimized assembly took about 50% of the
> > time-budget. Another thing about this routine: My predecessors had

A common experience. More usually it's 90% of the time spent in 10% of
the code. That's why software companies try - or should try - to
identify and avoid complex code.

> > That leaves about 2000 lines of assembly among about 1.2M lines of
> > code in the Linux kernel. The important thing is that Linux will be

Hmm. Even those 2000 lines are a little dangerous.

> > somewhere a few years from now. If you start rewriting stuff in
> > assembly, that will be nice for a year or so (you get the added
> > performance), but after that someone will need to fix some obscure
> > bug (and can't find it in the assembly mess).

Yes.

> > When IS it "allowed" to do assembly recoding for performance reasons?
> > I'd say that if you can shave off about 30% of a real-life application
> > (Not just a benchmark that does the one thing you can optimize over

I emphasize that the improvement must be visible _overall_.

> > and over again) then it is worth considering.... If you have a
> > real-time application that doesn't meet its timing requirements, you
> > can start optimizing if you can shave off 10% of the total time.

Very true.

> > In short, please, don't advocate rewriting (parts of) the linux kernel
> > in assembly for performance reasons.

... alone, without considering the global impact.

> This is an excellent response and I appreciate it. However, my proposal
> is to substitute (as a compile-time option) complete procedures
> (new files) written in assembly. The current setup makes for difficult
> maintainability where inline asm is mixed with 'C' code. To make
> it worthwhile, the speed increase must more than compensate for the
> increased call overhead. You find a bug, the asm source is not used
> until somebody fixes it.

I presume you mean: "not used again because it'll be ifdefed away until
somebody fixes it". How will we know when it's fixed? In software
generally, "fixes" are as likely as not to introduce a new bug for every
one they repair. Not so many people can check your work if it is in
assembler, and they ARE more likely to make a mistake too if it is
written in assembler. By definition you are only considering
assemblerizing vital parts of the kernel, and that makes me shudder as a
concept!
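
To make sure I've understood the shape of it, a toy userland sketch of
what I think you're proposing (the names and the option are invented,
and the asm half would really live in its own .S file - I inline it here
only to keep the example in one file, which is exactly what you'd avoid
in the kernel):

#include <stdio.h>

/* Build with -DUSE_ASM_VERSION (x86, gcc) to get the "assembler"
 * implementation; build without it to fall back to the reference C.
 * When the asm version is found buggy, you stop defining the option
 * "until somebody fixes it". */

#ifdef USE_ASM_VERSION
static unsigned add_impl(unsigned a, unsigned b)
{
        unsigned r;
        /* stand-in for a hand-written routine */
        __asm__ ("addl %2, %0" : "=r" (r) : "0" (a), "r" (b));
        return r;
}
#else
static unsigned add_impl(unsigned a, unsigned b)
{
        return a + b;            /* the maintainable version */
}
#endif

int main(void)
{
        printf("2 + 3 = %u\n", add_impl(2, 3));
        return 0;
}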

> Cheers,
> Dick Johnson


Peter

