From: devnull@spaans ...
Date: Tue, 12 Sep 2000 07:44:49 -0400
Subject: Re: Using Yarrow in /dev/random
Date: Mon, 11 Sep 2000 13:08:59 +0000
From: Pravir Chandra <pchandra@rstcorp.com>
I've been working to change the implementation of /dev/random over to the Yarrow-160A algorithm created by Bruce Schneier and John Kelsey. We've been working on parallel development for Linux and NT so that the algorithms match. The Yarrow-160A algorithm is a variant of Yarrow-160 that has come about from discussions with John Kelsey. We've been in contact with him throughout our development effort.
I'm not a big fan of Yarrow, since it (in my opinion) places too much faith in the crypto algorithms. It uses a pathetically small entropy pool, and assumes that the hash function will do the rest. Which is fine, but that makes it a pseudo-RNG, or a crypto-RNG, and not really an entropy collector. That's not necessarily a bad thing, but IMO that should live outside the kernel. There's no real point putting something like Yarrow in kernel space. Better that it collect entropy from a true entropy collector that's in-kernel (like the current /dev/random), and do the rest of the processing in user space. After all, the main reason why /dev/random is in the kernel is precisely because that's the most efficient place to collect as much entropy as possible.
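To make that split concrete, here's a rough user-space sketch of the shape I mean: pull a small seed from the in-kernel collector, then stretch it with a hash, the way a Yarrow-style crypto-RNG does. This is a toy SHA-1 counter-mode generator, not the real Yarrow-160 reseed/generate logic; it assumes OpenSSL's SHA1() (link with -lcrypto), and the names and sizes are made up for illustration.

/*
 * Toy sketch of a user-space crypto-RNG seeded from the in-kernel
 * entropy collector.  NOT the real Yarrow-160 logic: just hash
 * (key || counter) to stretch a small true-entropy seed.
 */
#include <stdio.h>
#include <string.h>
#include <openssl/sha.h>

#define KEY_BYTES 20            /* Yarrow-160's pool is one SHA-1 state */

static unsigned char key[KEY_BYTES];
static unsigned long counter;

/* Reseed: pull fresh entropy from the kernel's entropy collector. */
static int reseed(void)
{
	FILE *f = fopen("/dev/random", "rb");
	if (!f)
		return -1;
	if (fread(key, 1, KEY_BYTES, f) != KEY_BYTES) {
		fclose(f);
		return -1;
	}
	fclose(f);
	counter = 0;
	return 0;
}

/* Generate: hash (key || counter) to produce each 20-byte block. */
static void generate(unsigned char *out, size_t len)
{
	unsigned char block[KEY_BYTES + sizeof(counter)];
	unsigned char digest[SHA_DIGEST_LENGTH];

	while (len > 0) {
		size_t n = len < SHA_DIGEST_LENGTH ? len : SHA_DIGEST_LENGTH;
		memcpy(block, key, KEY_BYTES);
		memcpy(block + KEY_BYTES, &counter, sizeof(counter));
		SHA1(block, sizeof(block), digest);
		memcpy(out, digest, n);
		out += n;
		len -= n;
		counter++;
	}
}

int main(void)
{
	unsigned char buf[64];
	size_t i;

	if (reseed() != 0) {
		perror("/dev/random");
		return 1;
	}
	generate(buf, sizeof(buf));
	for (i = 0; i < sizeof(buf); i++)
		printf("%02x", buf[i]);
	putchar('\n');
	return 0;
}

Note that nothing here needs to be in the kernel; the only kernel service it uses is the entropy collection behind /dev/random.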
If you look at the current 2.4 /dev/random code, you'll see that there are already two entropy pools in use, which is designed to support something like the "catastrophic reseed" feature which Bruce Schneier is so fond of. What's currently there isn't perfect, in that there's still a bit too much leakage between the two pools, but I think that approach is much better than the pure Yarrow approach.
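For those who haven't read drivers/char/random.c, the two-pool idea looks roughly like this. This is only an illustrative sketch, not the actual 2.4 code: mix() stands in for the driver's real mixing function, the 256-bit threshold is invented for the example, and add_timing_entropy() is a hypothetical name.

/*
 * Sketch of the two-pool idea: interrupt-time entropy feeds a primary
 * pool, and the output pool is only reseeded in large batches
 * ("catastrophic reseed"), never a few bits at a time.
 */
#include <stddef.h>
#include <stdio.h>

#define POOL_WORDS   128
#define RESEED_BITS  256	/* illustrative reseed threshold */

struct pool {
	unsigned int data[POOL_WORDS];
	int entropy_bits;
};

static struct pool primary, output_pool;

/* Stand-in for the driver's real mixing function (toy rotate-xor). */
static void mix(struct pool *p, const void *in, size_t len, int bits)
{
	const unsigned char *b = in;
	size_t i;

	for (i = 0; i < len; i++)
		p->data[i % POOL_WORDS] =
			(p->data[i % POOL_WORDS] << 1 |
			 p->data[i % POOL_WORDS] >> 31) ^ b[i];
	p->entropy_bits += bits;
}

/* Interrupt-time samples always feed the primary pool. */
static void add_timing_entropy(unsigned int sample, int credited_bits)
{
	mix(&primary, &sample, sizeof(sample), credited_bits);

	/*
	 * Catastrophic reseed: transfer entropy to the output pool
	 * only once a large batch has accumulated.
	 */
	if (primary.entropy_bits >= RESEED_BITS) {
		mix(&output_pool, primary.data, sizeof(primary.data),
		    primary.entropy_bits);
		primary.entropy_bits = 0;
	}
}

int main(void)
{
	unsigned int i;

	/* Feed fake timing samples, crediting 8 bits each. */
	for (i = 0; i < 40; i++)
		add_timing_entropy(i * 2654435761u, 8);

	printf("primary: %d bits, output: %d bits\n",
	       primary.entropy_bits, output_pool.entropy_bits);
	return 0;
}

The point of transferring in big batches is that an attacker who has captured the output pool's state has to guess a whole batch of entropy at once, instead of tracking it a few bits at a time as it trickles in.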
- Ted