Subject: Re: Using Yarrow in /dev/random
"Theodore Y. Ts'o" wrote:
>
> Date: Tue, 12 Sep 2000 09:56:12 +0000
> From: Pravir Chandra <pchandra@rstcorp.com>
>
> I agree that the Yarrow generator does place some faith in the crypto
> cipher and the accumulator uses a hash, but the current /dev/random
> places faith in a CRC and urandom uses a hash.
>
> No, not true. The mixing into the entropy pool uses a twisted LFSR,
> but all outputs from the pool (to either /dev/random or /dev/urandom)
> are filtered through SHA-1 as a whitener. The key here, though, and
> what makes this fundamentally different from Yarrow, is that since
> we're feeding the entire (large) entropy pool through SHA-1, even if
> the SHA-1 algorithm is very badly broken (say as in what's been
> happening with MD5), as long as there's sufficient entropy in the
> pool, the adversary can derive only minimal information about the
> pool, since the 8192 -> 160 bit transform has to lose information by
> definition.
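
To make that 8192 -> 160 counting argument concrete, here is a toy
program (emphatically not the driver code: the sizes match the real
ones, but toy_hash() is just a stand-in marking where SHA-1 sits).
Whatever the hash is, squeezing a 1024-byte pool into 20 bytes is
many-to-one:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define POOL_BYTES 1024          /* 8192-bit pool, as in /dev/random */
#define OUT_BYTES  20            /* 160-bit output, as from SHA-1    */

/* Stand-in for SHA-1: a trivial FNV-style fold, illustrative only. */
static void toy_hash(const uint8_t *in, size_t len, uint8_t out[OUT_BYTES])
{
    uint64_t h = 0xcbf29ce484222325ULL;
    size_t i;

    memset(out, 0, OUT_BYTES);
    for (i = 0; i < len; i++) {
        h = (h ^ in[i]) * 0x100000001b3ULL;
        out[i % OUT_BYTES] ^= (uint8_t)(h >> 32);
    }
}

int main(void)
{
    uint8_t pool[POOL_BYTES] = { 0 };    /* pretend this holds entropy */
    uint8_t out[OUT_BYTES];

    toy_hash(pool, sizeof(pool), out);
    /* 2^8192 pool states map onto at most 2^160 outputs, so on average
     * 2^8032 states share each output; even a fully broken hash can
     * reveal at most 160 bits about the pool's contents. */
    printf("pool bits %d -> output bits %d: %d bits necessarily lost\n",
           POOL_BYTES * 8, OUT_BYTES * 8, (POOL_BYTES - OUT_BYTES) * 8);
    return 0;
}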

It seems clear that /dev/random's twisting and large pool do a far
better job at the entropy collection stage than Yarrow's simple hash
and tiny pool. Methinks the Yarrow developers would argue that their
mechanism is enough, but I'm not convinced of that.

Even if I were, I would see no reason to change that part of
/dev/random. We have a working entropy collector that is based on
apparently sound design principles, has been fairly heavily analyzed,
and fits cleanly into the kernel. Why even consider replacing it with
something new that is clearly weaker?
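
For anyone who hasn't read random.c, the mixing step in question looks
roughly like the sketch below. The tap offsets and twist constants here
are made up for illustration; the real driver derives its taps and its
twist table from a carefully chosen CRC-like polynomial:

#include <stdint.h>
#include <stdio.h>

#define POOL_WORDS 128            /* 128 x 32 bits = 4096-bit pool */

static uint32_t pool[POOL_WORDS];
static int add_ptr;

static void mix_pool_word(uint32_t in)
{
    /* hypothetical twist constants, standing in for the CRC-derived
     * table in the real driver */
    static const uint32_t twist[8] = {
        0x00000000, 0x1db71064, 0x3b6e20c8, 0x26d930ac,
        0x76dc4190, 0x6b6b51f4, 0x4db26158, 0x5005713c
    };
    uint32_t w;

    add_ptr = (add_ptr - 1) & (POOL_WORDS - 1);
    w = (in << 7) | (in >> 25);                    /* rotate new input */
    w ^= pool[add_ptr];                            /* XOR in tapped words */
    w ^= pool[(add_ptr + 25) & (POOL_WORDS - 1)];  /* (offsets invented) */
    w ^= pool[(add_ptr + 51) & (POOL_WORDS - 1)];
    w ^= pool[(add_ptr + 76) & (POOL_WORDS - 1)];
    pool[add_ptr] = (w >> 3) ^ twist[w & 7];       /* shift and "twist" */
}

int main(void)
{
    uint32_t t;

    for (t = 0; t < 256; t++)
        mix_pool_word(t * 2654435761u);    /* feed in some sample words */
    printf("pool[0] = %08x\n", pool[0]);
    return 0;
}

The point is that every input word gets smeared across a 4096-bit state
that is never exported directly, rather than being absorbed into a
160-bit hash context.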

Adding a second stage along Yarrow-ish lines is much more plausible.
That is the area where Yarrow (or my proposed stuff based on Rijndael,
or the second pool added in the 2.4 driver) seems to have some clear
advantages.
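
For concreteness, such a second stage would look something like the
Yarrow-style sketch below: a block cipher run in counter mode that
periodically rekeys from its own output (Yarrow's "generator gate").
toy_encrypt() is a placeholder permutation, not a real cipher;
Yarrow-160 puts 3DES there, and something Rijndael-based would slot in
the same way:

#include <stdint.h>
#include <stdio.h>

#define GATE_INTERVAL 10   /* rekey after this many output blocks */

static uint64_t key;       /* would be derived by hashing the entropy pool */
static uint64_t counter;
static int blocks_since_gate;

/* Placeholder "cipher": a keyed xorshift mix, NOT cryptographic. */
static uint64_t toy_encrypt(uint64_t k, uint64_t block)
{
    uint64_t x = block ^ k;

    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    return x ^ (k * 0x9e3779b97f4a7c15ULL);
}

static uint64_t next_output_block(void)
{
    uint64_t out = toy_encrypt(key, counter++);

    if (++blocks_since_gate >= GATE_INTERVAL) {
        /* generator gate: rekey from the generator itself, so that a
         * future key compromise cannot be run backwards past this point */
        key = toy_encrypt(key, counter++);
        blocks_since_gate = 0;
    }
    return out;
}

int main(void)
{
    int i;

    key = 0x0123456789abcdefULL;    /* stand-in for a pool-derived key */
    for (i = 0; i < 25; i++)
        printf("%016llx\n", (unsigned long long)next_output_block());
    return 0;
}

The attraction is that heavy traffic between reseeds costs one cipher
operation per output block instead of hashing the whole pool every
time.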

> I also agree that the entropy pools are small, but the nature of the
> hash preserves the amount of entropy that has been used to create the
> state of the pools. Basically, if the pool size is 160 bits (hash
> output), its state can be built from more than 160 bits of entropy;
> it's just that adding entropy after that increases the unguessability
> (against conventional attacks) of the state, but brute forcing the
> state is still 2^160.

One problem with that is that both the entropy inputs and the output loads
may be quite bursty, so you need a substantial buffer to smooth things
out. Consider a server that does little all night then gets heavy demand
around 9 AM. With /dev/random's 4096-bit pool and a bit of luck, it may
have enough entropy tucked away by morning to cope well. If the design
limits the entropy it can store to 160 bits, then there may be a problem.
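
A toy simulation of that scenario (every number here is invented; only
the effect of the storage cap matters):

#include <stdio.h>

/* Trickle entropy in overnight, then serve a burst of key requests. */
static int keys_served(int cap_bits)
{
    int pool = 0, served = 0, hour, req;

    for (hour = 0; hour < 9; hour++) {    /* nine quiet overnight hours */
        pool += 100;                       /* ~100 bits/hour trickling in */
        if (pool > cap_bits)
            pool = cap_bits;               /* entropy beyond the cap is lost */
    }
    for (req = 0; req < 5; req++)          /* 9 AM burst: five 128-bit keys */
        if (pool >= 128) {
            pool -= 128;
            served++;
        }
    return served;
}

int main(void)
{
    printf("160-bit cap:  %d of 5 keys served without blocking\n",
           keys_served(160));
    printf("4096-bit cap: %d of 5 keys served without blocking\n",
           keys_served(4096));
    return 0;
}

With the 160-bit cap the burst exhausts the pool after a single key;
with the 4096-bit pool the overnight trickle covers the whole burst.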

> Which is just another way of saying that Yarrow is only a
> cryptographic random number generator whose maximum entropy storage
> is 160 bits.....

As I see it, /dev/random handles entropy collection better, but likely
has things to learn (or has already learned, in 2.4) from the Yarrow
design on the output side.
