Date: Thu, 23 May 1996 08:50:27 -0700
From: Dan Weiskopf
Subject: Re: CONFIG_RANDOM (compromise?)
I hate to add to this debate, since it appears to be getting nowhere fast, but I think it's time for a brief review of the major arguments against keeping /dev/random in the kernel, and why none of them are really persuasive.
>>>>> "Martin" == Martin Dalecki <dalecki@namu23.Num.Math.Uni-Goettingen.de> writes: [deletia]
Martin> 0.4% here 0.4% there and we will soon see a kernel which Martin> doesn't fit anymore, even when compressed, onto a 3.5inch Martin> disk :-).
This is smiley'd for our protection, of course, but it's still an often-cited reason that /dev/random ought to be made optional: kernel bloat. The argument is roughly that all of these little things we want to keep in the kernel will eventually result in a 3.6mb (compressed) kernel that can only boot on an UltraSparc with 512mb of RAM. But it's just not so; some slopes are not slippery, and this is one of them. The core kernel developers are already quite sensitive to unnecessary code inflation and guard against it wherever possible. The argument that this 16kb addition will lead, by natural progression, to megabytes of worthless code just doesn't hold up, however common a rhetorical trope it may be.
I find the "people on 386-16's will have to wait longer to compile" argument unpersuasive as well. This isn't because I have anything against the people who are forced to use low-end hardware for Linux; it used to take me upwards of two hours to build kernels on a 486-33 with 4mb of RAM, and every new feature cost more in terms of time. (I continued to build new kernels almost daily.) People who are using such hardware are simply used to long builds. If it's going to take overnight to make a kernel, a few extra minutes for the random generator just won't hurt all that much by comparison. So I guess that here I don't see the burden as being that great.
There's also an argument that the random number generator causes unnecessary overhead for people who never use it, and should therefore be made an option. However, I have not yet seen anyone post benchmarks or any other form of statistic which might demonstrate that such overhead exists. I think the burden of proof is on their shoulders in this instance. I, at least, would certainly be interested in seeing if there is measurable slowdown.
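For what it's worth, here is one crude way someone could go looking for that overhead (this is only a sketch of my own, not anything that has actually been posted): time the same interrupt-heavy workload under two otherwise identical kernels, one built with the entropy-gathering hooks and one with them stubbed out, and compare. The iteration count below is arbitrary, and you'd want a file bigger than RAM so the reads actually hit the disk rather than the buffer cache.

/*
 * timeio.c -- crude timing harness (illustrative only).
 * Rereads a file a number of times to generate disk interrupts and
 * reports the elapsed wall-clock time.  Run it under a kernel with
 * the entropy hooks and under one without, and compare.
 */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/time.h>

int main(int argc, char **argv)
{
    char buf[4096];
    struct timeval t0, t1;
    long usec;
    int i, fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file-to-reread>\n", argv[0]);
        exit(1);
    }

    gettimeofday(&t0, NULL);
    for (i = 0; i < 100; i++) {
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            exit(1);
        }
        while (read(fd, buf, sizeof(buf)) > 0)
            ;   /* just generate I/O and the interrupts that go with it */
        close(fd);
    }
    gettimeofday(&t1, NULL);

    usec = (t1.tv_sec - t0.tv_sec) * 1000000L
         + (t1.tv_usec - t0.tv_usec);
    printf("elapsed: %ld.%06ld seconds\n", usec / 1000000L, usec % 1000000L);
    return 0;
}

Run it a few times on each kernel and compare. If the difference is lost in the noise, the overhead argument goes away; if it isn't, then at least we'd finally have a number to argue about.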
Finally, there are technical arguments designed to show that /dev/random's functionality could be provided by cheaper methods (in terms of processor time, or some other measure). I'm hardly aware of all the technical issues at play when it comes to cryptography, but Ted appears convinced that any cheaper method would weaken the security that /dev/random provides. On this, I just have to defer to authority. ;-)
I've probably missed some of the arguments that have come up in this debate so far, but these strike me as the major ones. None of them look good yet, although benchmarks demonstrating real performance hits could be promising. To those who have argued against /dev/random: I don't mean to trivialize your positions; I only mean to explain why others might not be immediately won over. In any case, it looks like Ted and Linus are the ones to convince here, and Linus (hallowed be his name) has remained silent on the issue.
Maybe this should be moved to linux-randomness (along with the Penguin Debates).
--
Dan Weiskopf        | "It's easier to say `I love you' than `Yours sincerely',
debaser@netcom.com  | I suppose."  -- Elvis Costello, "Big Sister's Clothes"
Rorschach@EFNet.IRC | Department of Philosophy, Brown University.