Subject: Re: Reading large amounts from /dev/urandom broken
On Wed, 2014-07-23 at 11:14 -0400, Theodore Ts'o wrote:
> On Wed, Jul 23, 2014 at 04:52:21PM +0300, Andrey Utkin wrote:
> > Dear developers, please check bugzilla ticket
> > https://bugzilla.kernel.org/show_bug.cgi?id=80981 (not the initial
> > issue, but starting with comment #3).
> >
> > Reading from /dev/urandom gives EOF after 33554431 bytes. I believe
> > it is introduced by commit 79a8468747c5f95ed3d5ce8376a3e82e0c5857fc,
> > with the chunk
> >
> > nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
> >
> > which is described in commit message as "additional paranoia check to
> > prevent overly large count values to be passed into urandom_read()".
> >
> > I don't know why people pull such large amounts of data from
> > urandom, but given that two bug reports about problems doing so
> > arrived today, I consider this something that is actually practiced.
>
> I've inquired on the bugzilla why the reporter is abusing urandom in
> this way. The other commenter on the bug replicated the problem, but
> that's not a "second bug report" in my book.
>
> At the very least, this will probably cause me to insert a warning
> printk: "insane user of /dev/urandom: [current->comm] requested %d
> bytes" whenever someone tries to request more than 4k.

Ok, I would be fine with that.
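Something along these lines, perhaps? Just a rough sketch of where the
check could sit in urandom_read() in drivers/char/random.c; the 4k
threshold, the message text and the ratelimiting are illustrative, not
an actual patch:

	static ssize_t
	urandom_read(struct file *file, char __user *buf, size_t nbytes,
		     loff_t *ppos)
	{
		/* warn about oversized requests, rate-limited so a
		 * tight read loop cannot flood the log */
		if (nbytes > 4096)
			printk_ratelimited(KERN_NOTICE
				"random: %s requested %zu bytes from /dev/urandom\n",
				current->comm, nbytes);

		nbytes = min_t(size_t, nbytes, INT_MAX >> (ENTROPY_SHIFT + 3));
		return extract_entropy_user(&nonblocking_pool, buf, nbytes);
	}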

The dd if=/dev/urandom of=random_file.dat use case seems reasonable to
me, so we should try not to break it. But, of course, there are other
possibilities.
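For reference, the clamp quoted above lands exactly on the reported
offset: ENTROPY_SHIFT is 3 in drivers/char/random.c, so
INT_MAX >> (3 + 3) = 2147483647 >> 6 = 33554431 bytes, i.e. 2^25 - 1,
just short of 32 MiB. A small userspace program (my sketch, not taken
from the bug report) makes the short read visible on an affected
kernel:

	#include <stdio.h>
	#include <stdlib.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <limits.h>

	#define ENTROPY_SHIFT 3	/* as defined in drivers/char/random.c */

	int main(void)
	{
		size_t want = 1UL << 26;	/* ask for 64 MiB in one read() */
		char *buf = malloc(want);
		int fd = open("/dev/urandom", O_RDONLY);
		ssize_t got;

		if (fd < 0 || !buf)
			return 1;

		got = read(fd, buf, want);
		printf("requested %zu, got %zd\n", want, got);
		printf("clamp: INT_MAX >> (ENTROPY_SHIFT + 3) = %d\n",
		       INT_MAX >> (ENTROPY_SHIFT + 3));
		/* On a kernel with the clamp, got is 33554431; a caller
		 * that treats the short read as EOF then stops early,
		 * which is the symptom in the bugzilla. */
		free(buf);
		close(fd);
		return 0;
	}

Note the clamp only bites when a single read() asks for more than
33554431 bytes; dd with its default 512-byte blocks would never notice,
so presumably a large bs= was in use.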

Bye,
Hannes


