Date: Wed, 15 Oct 1997
From: mingo
Subject: Re: monitoring entropy

On Wed, 15 Oct 1997, Malcolm Beattie wrote:

> As I understand it, the difference between data coming from /dev/random
> and /dev/urandom is analogous to the difference between (1) generating
> initial PGP keys one after another, each with perfect randomness and (2)
> generating a cryptographically strongly random stream of numbers from a
> random initialisation vector by repeatedly applying a cryptographic hash.

Curious: is it proven that 'inverting the hash from 4096 bits of output'
is equivalent (complexity-wise) to 'inverting the hash from N*4096 bits of
output'?
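
(To make the question concrete, here is a toy sketch in Python of the kind of
generator Malcolm describes: a hidden state, repeatedly hashed to produce
output blocks. SHA-256 and all the names here are only illustrative; this is
not the actual /dev/urandom code.)

    import hashlib

    def hash_stream(seed, nblocks):
        # hidden internal state; the attacker only ever sees the output blocks
        state = hashlib.sha256(seed).digest()
        out = []
        for _ in range(nblocks):
            # emit a block derived from the state...
            out.append(hashlib.sha256(b"output" + state).digest())
            # ...and step the state forward with a different prefix
            state = hashlib.sha256(b"step" + state).digest()
        return b"".join(out)

    # "inverting the hash from N*4096 bits of output" = recovering 'state'
    # (or predicting the next block) given many such output blocks
    stream = hash_stream(b"random initialisation vector", 16)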

Let's just take the simplest and most powerful crypto system, the one-time pad:

4096 bits of truly random data: R
4096 bits of to-be-encrypted text: T

Both sides know R; communication is monitored by attackers. Let's take R
XOR T and send it from A to B. This is safe.
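
(A toy sketch of this in Python; the 512-byte size is the 4096 bits above,
and os.urandom merely stands in for "truly random data" here.)

    import os

    def xor_bytes(a, b):
        # XOR two equal-length byte strings
        return bytes(x ^ y for x, y in zip(a, b))

    R = os.urandom(512)              # 4096 bits of (assumed) truly random pad
    T = b"the plaintext".ljust(512)  # 4096 bits of to-be-encrypted text
    C = xor_bytes(R, T)              # this is what the attacker sees

    assert xor_bytes(R, C) == T      # B, who knows R, recovers T exactly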

Now, let's suppose T1, T2, T3 are sent from A to B, all with the same R. It
can be shown that R loses exactly as much of its pure randomness as there is
correlation between T1, T2 and T3. If T1, T2, T3 are independent, R stays
externally random as well. But that's never true for _any_ real T, unless T
is random data as well (which is possible, but not a useful thing as a
message ;).
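
(Same toy setup as above, showing where the leak is: R cancels out of the XOR
of two ciphertexts, so the attacker gets T1 XOR T2, i.e. exactly the
correlation between the plaintexts, without ever touching R.)

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    R  = os.urandom(512)             # the pad, reused -- the mistake
    T1 = b"attack at dawn".ljust(512)
    T2 = b"attack at dusk".ljust(512)

    C1 = xor_bytes(R, T1)            # attacker observes C1 and C2 only
    C2 = xor_bytes(R, T2)

    # R drops out: whatever structure T1 and T2 share leaks straight through
    assert xor_bytes(C1, C2) == xor_bytes(T1, T2)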



Now, hashes are clearly more complex than an XOR, but is it proven that they
cannot be inverted, even when we pass them _known_ text?

Let's call this phenomenon 'key aging'. Any key will age, since the
information passed is never truly random; thus it can be predicted, and the
key can be recovered if enough information is passed.
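
(In the plain XOR case the aging is total as soon as one plaintext can be
guessed: the same toy sketch, with a predictable T, hands the whole key back.)

    import os

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    R = os.urandom(512)
    T = b"Subject: Re: monitoring entropy".ljust(512)  # predictable text
    C = xor_bytes(R, T)

    # headers, greetings, protocol framing: if the attacker can guess T,
    # the ciphertext gives up the key directly
    assert xor_bytes(C, T) == R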

-- mingo

