Subject: Re: Reading large amounts from /dev/urandom broken
From: Hannes Frederic Sowa <>
Date: Wed, 23 Jul 2014 17:19:38 +0200
On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> > Dear developers, please check bugzilla ticket
> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> > issue, but starting with comment #3).
> >
> > Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> > it was introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> > with the chunk
> >
> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> >
> > which is described in the commit message as an "additional paranoia
> > check to prevent overly large count values to be passed into
> > urandom_read()".
> >
> > I don't know why people pull such large amounts of data from urandom,
> > but given that today there are two bug reports regarding problems
> > doing that, I consider that this is practiced.
>
> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way. The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
>
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.
Ok, I would be fine with that.
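[For reference, an editorial aside rather than text from the thread: the cap in the quoted chunk matches the reported figure exactly. ENTROPY_SHIFT is 3 in drivers/char/random.c, so INT_MAX >> (ENTROPY_SHIFT + 3) is 2147483647 >> 6 = 33554431, i.e. a single read(2) on /dev/urandom can return at most 33554431 bytes on a kernel with that clamp. A minimal reproduction sketch; the 64 MiB request size is an arbitrary choice:

/*
 * Reproduction sketch (not from the thread): issue one large read(2)
 * against /dev/urandom and print how many bytes come back. On a kernel
 * with the clamp above, a single call returns at most 33554431 bytes.
 */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	size_t len = 64UL << 20;	/* request 64 MiB in one call */
	char *buf = malloc(len);
	int fd = open("/dev/urandom", O_RDONLY);
	ssize_t n;

	if (!buf || fd < 0) {
		perror("setup");
		return 1;
	}
	n = read(fd, buf, len);		/* a short read is expected here */
	printf("requested %zu, got %zd\n", len, n);
	close(fd);
	free(buf);
	return 0;
}
]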
The dd if=/dev/urandom of=random_file.dat use case seems reasonable to me, so we should try not to break it. But, of course, there are other possibilities.
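[Again an editorial sketch, not code from the thread: a consumer that needs a large amount of urandom data is robust against any per-read cap if it loops on read(2), treating a short read as normal and only a return of 0 or an error as terminal. The helper name read_urandom is made up for the example:

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/*
 * Fill buf with len bytes from /dev/urandom, retrying short reads so a
 * per-call cap never matters. Returns 0 on success, -1 on error or on
 * an unexpected EOF.
 */
static int read_urandom(void *buf, size_t len)
{
	char *p = buf;
	int fd = open("/dev/urandom", O_RDONLY);

	if (fd < 0)
		return -1;
	while (len > 0) {
		ssize_t n = read(fd, p, len);

		if (n < 0 && errno == EINTR)
			continue;	/* retry interrupted calls */
		if (n <= 0) {		/* hard error or unexpected EOF */
			close(fd);
			return -1;
		}
		p += n;
		len -= n;
	}
	close(fd);
	return 0;
}
]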
Bye,
Hannes