Date: Tue, 28 May 1996 21:49:51 -0400
From: "Theodore Y. Ts'o" <>
Subject: Re: CONFIG_RANDOM (compromise?)
   From: Albert Cahalan <albert@ccs.neu.edu>
   Date: Sun, 26 May 1996 17:21:50 -0400 (EDT)

   >>>> There's also an argument that the random number generator causes
   >>>> unnecessary overhead for people who never use it, and should therefore
   >>>> be made an option. However, I have not yet seen anyone post
   >>>> benchmarks or any other form of statistic which might demonstrate that
   >>>> such overhead exists.

   Someone posted loss of about 5% for read performance.
That number appears to be complete bullsh*t. First of all, it was only 4.5%, and it amounted to the difference between 7.47MB/sec and 7.13MB/sec --- or .34 MB/sec.
I've just done my own tests using hdparm -t, and have noticed similar variances (+/- 10%) between successive runs using the same kernel. This only goes to show that there are lies, d*mn lies, statistics... and then there are benchmarks.
I've done my own cycle-counting benchmark. Each call to mix in randomness only takes 150 to 1000 cycles, depending on memory caching effects (this is on a laptop without a lot of cache memory). On a 75MHz Pentium, this translates to 2 to 13 microseconds, with the average being 8 microseconds. This is *not* a lot of time --- consider that the default Linux clock tick is 0.01 seconds, and we're talking about 0.000008 seconds.
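For the curious, the harness is nothing fancy. Here's a minimal sketch of the approach, assuming a Pentium-class CPU with the RDTSC instruction and gcc inline assembly; mix_sample() is a hypothetical stand-in for the routine under test, not the driver's actual entry point:

#include <stdio.h>

/* Read the Pentium time-stamp counter; assumes gcc inline asm. */
static inline unsigned long long rdtsc(void)
{
        unsigned int lo, hi;

        __asm__ __volatile__("rdtsc" : "=a" (lo), "=d" (hi));
        return ((unsigned long long) hi << 32) | lo;
}

extern void mix_sample(void);   /* hypothetical routine under test */

int main(void)
{
        unsigned long long start, delta, best = ~0ULL, total = 0;
        int i;

        /* Record both the average and the minimum over many runs;
         * the minimum filters out cache misses and stray interrupts. */
        for (i = 0; i < 1000; i++) {
                start = rdtsc();
                mix_sample();
                delta = rdtsc() - start;
                total += delta;
                if (delta < best)
                        best = delta;
        }
        printf("best %llu cycles, average %llu cycles\n",
               best, total / 1000);
        return 0;
}

The minimum gives you the warm-cache cost; the gap between it and the average is the caching effect I mentioned above.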
When you consider that during the hdparm benchmark, the process is blocked doing I/O most of the time, claims of significant I/O degradation are extremely suspicious. My own trials with hdparm -t, averaged over multiple runs to smooth out spikes caused by the unevenness of the benchmark, show no discernible difference on my laptop between a system that has /dev/random and one that doesn't.
   Next: It might be desirable to collect random events in the kernel.
   It is _not_ desirable to store and mix them with a complicated hash
   function in the kernel.
Take a closer look at the code. You will see that it is not using a complicated hash function to mix in the random events. This was done by design, so that it wouldn't take many CPU cycles to mix in the randomness, while still doing a good job.
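To give a flavor of what the mixing does, here is a stripped-down sketch in the same style, with a made-up pool size and tap offsets (the real driver's polynomial taps, rotation schedule, and constants are different):

#define POOL_WORDS 128          /* illustrative; not the driver's size */

static unsigned int pool[POOL_WORDS];
static unsigned int add_ptr;
static unsigned int rotate;

static void mix_word(unsigned int w)
{
        unsigned int i;

        /* Rotate the input so successive samples land on different
         * bits; the "& 31" keeps the shift defined when rotate is 0. */
        w = (w << rotate) | (w >> ((32 - rotate) & 31));
        rotate = (rotate + 7) & 31;

        /* Walk backwards through the pool, XORing the new word against
         * a few taps spread across it (these offsets are made up). */
        i = add_ptr = (add_ptr - 1) & (POOL_WORDS - 1);
        w ^= pool[i];
        w ^= pool[(i + 13) & (POOL_WORDS - 1)];
        w ^= pool[(i + 51) & (POOL_WORDS - 1)];
        w ^= pool[(i + 101) & (POOL_WORDS - 1)];

        pool[i] = w;
}

Each incoming word costs a rotate, a handful of XORs, and a few memory references --- which is where the cycle counts above come from.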
It might be possible to move the complicated hash function (which is used when you actually read from /dev/random) out to a user-mode daemon, but you couldn't move it out to a library, because of user-to-user security concerns. (A random user program can't be allowed to see the random pool directly, since that would completely compromise the security of the random pool.) The real problem with a user-mode daemon, though, is that it would significantly reduce the chances that a user would have their system correctly configured to be able to use /dev/random, and as a result application programmers would be much less likely to use it. Security must be easy to use, or people won't use it.
I've considered having the interrupt-time code do nothing but place the random events in a ring buffer, and then mixing them into the pool later from a bottom-half driver, as sketched below. It's not clear that it'll save enough clock cycles to actually be worth it, though. I'll do some benchmark tests and see how they work out.
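Roughly, the scheme would look like this (all of the names here are hypothetical, and I'm reusing the mix_word() sketch from above for the actual mixing):

#define RING_SIZE 64    /* power of two, so masking replaces modulo */

struct sample {
        unsigned long time;
        unsigned long value;
};

static struct sample ring[RING_SIZE];
static volatile unsigned int head, tail;

/* Interrupt time: nothing but two stores and an increment. */
static void record_sample(unsigned long time, unsigned long value)
{
        struct sample *s = &ring[head & (RING_SIZE - 1)];

        s->time = time;
        s->value = value;
        head++;         /* on overrun we simply overwrite the oldest */
}

/* Bottom half: drain the ring and do the real mixing, safely out of
 * interrupt context. */
static void drain_samples(void)
{
        while (tail != head) {
                struct sample *s = &ring[tail & (RING_SIZE - 1)];

                mix_word(s->time);
                mix_word(s->value);
                tail++;
        }
}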
   Does anyone else think the kernel has too many checksum/CRC/hash
   functions? We need one for Elf, one(?) for networking, and one for
   ftape. The decompression code does not need its own CRC. Neither
   does the /dev/random.
The networking code will eventually need to use both SHA and MD5; the protocols demand it. Similarly, the decompression code needs to use CRC, since that's what the gzip format requires. During the Berlin conference, I talked to Alan about moving the hash functions currently in /dev/random out into the lib directory of the kernel during the 2.1 kernel development, since it's clear that future secure networking protocols will also require access to the SHA and MD5 functions.
The CRC functions used by the boot-time and ramdisk decompression code have already been consolidated into one place in the lib directory, for ease of code maintenance. That's work I did a few months ago. So we are moving towards consolidating the code where it is possible.
- Ted
P.S. In the latest incarnation, the /dev/random driver now only takes 10k of memory. I've done some further optimization of its memory footprint, mostly by eliminating inline functions where they weren't necessary. The driver is now also being used by the TCP layer to generate secure TCP sequence numbers. These patches will hopefully be going into pre-2.0.9, although I'll be asking Alan to vet the TCP changes first. They're pretty straightforward, though.
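For those wondering what "secure sequence numbers" means here: the general shape is the one Bellovin describes in RFC 1948 --- hash the connection four-tuple together with a secret, then add a clock-driven offset so the numbers still advance over time. A sketch with assumed names throughout (md5_transform() and random_secret are stand-ins, not the actual interface):

#include <string.h>

/* Assumed names: a raw MD5 block transform on a 4-word digest and
 * 16-word input, and a boot-time secret filled from the entropy pool. */
extern void md5_transform(unsigned int digest[4],
                          unsigned int const in[16]);

static unsigned int random_secret[12];

unsigned int secure_tcp_isn(unsigned int saddr, unsigned int daddr,
                            unsigned short sport, unsigned short dport,
                            unsigned int clock)
{
        /* Standard MD5 initial values. */
        unsigned int hash[4] = { 0x67452301, 0xefcdab89,
                                 0x98badcfe, 0x10325476 };
        unsigned int in[16];

        /* Hash the connection four-tuple together with the secret;
         * without the secret an attacker could predict the result. */
        memcpy(in, random_secret, sizeof(random_secret));
        in[12] = saddr;
        in[13] = daddr;
        in[14] = ((unsigned int) sport << 16) | dport;
        in[15] = 0;
        md5_transform(hash, in);

        /* Per-connection constant plus a steadily advancing clock
         * term, so sequence numbers keep moving forward. */
        return hash[0] + clock;
}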