Subject: Re: [PATCH, RFC] random: introduce getrandom(2) system call
> In the end people would just call getentropy in a loop and fetch 256
> bytes each time. I don't think the artificial limit makes any sense.
> I agree that this allows a potential misuse of the interface, but
> doesn't a warning in dmesg suffice?

It makes their code not work, so they are forced to think about
fixing it before adding the obvious workaround.
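
Roughly speaking, the workaround everyone would reach for is something
like this (just a sketch, assuming the buf/buflen/flags prototype from
the patch; raw syscall(2) because there is no libc wrapper yet, and
__NR_getrandom standing in for whatever number gets assigned):

#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Pull 'len' bytes by looping in chunks that fit under the 256-byte cap. */
static int get_bytes_chunked(unsigned char *buf, size_t len)
{
	while (len) {
		size_t chunk = len > 256 ? 256 : len;
		long n = syscall(__NR_getrandom, buf, chunk, 0);

		if (n < 0)
			return -1;	/* errno left set by the syscall */
		buf += n;
		len -= (size_t)n;
	}
	return 0;
}

Writing that loop is exactly the hassle referred to above: ideally,
somewhere along the way somebody asks why the program needs kilobytes
of kernel randomness per call in the first place.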

> It also makes it easier to port applications from open("/dev/*random"),
> read(...) to getentropy() by reusing the same limits.

But such an application *is broken*. Making it easier to port is
an anti-goal. The goal is to make it enough of a hassle that
people will *fix* their code.
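
For concreteness, the kind of application I mean is one doing roughly
this (sketch only; /dev/random and the 256-byte request are illustrative):

#include <fcntl.h>
#include <unistd.h>

/* One big read straight from /dev/random: far past the 32-byte
 * guideline quoted below, with the short-read case ignored too. */
static void broken_pattern(unsigned char *buf)
{
	int fd = open("/dev/random", O_RDONLY);

	if (fd >= 0) {
		read(fd, buf, 256);
		close(fd);
	}
}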

There's a *reason* that the /dev/random man page explicitly tells
people not to trust software that reads more than 32 bytes at a time
from /dev/random:

> While some safety margin above that minimum is reasonable, as a guard
> against flaws in the CPRNG algorithm, no cryptographic primitive
> available today can hope to promise more than 256 bits of security,
> so if any program reads more than 256 bits (32 bytes) from the kernel
> random pool per invocation, or per reasonable reseed interval (not
> less than one minute), that should be taken as a sign that its
> cryptography is *not* skillfully implemented.

("not skillfully implemented" was the phrase chosen after some discussion
to convey "either a quick hack or something you shouldn't trust.")

To expand on what I said in my mail to Ted, 256 is too high.
I'd go with OpenBSD's 128 bytes or even drop it to 64.
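
The flip side, and what the man page text is steering people toward, is
a single small transfer per reseed interval, used only to (re)seed a
userspace CPRNG. A sketch (32 bytes being the 256-bit bound from the
quote above, __NR_getrandom again a stand-in):

#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>

static int reseed_userspace_cprng(void)
{
	unsigned char seed[32];
	long n = syscall(__NR_getrandom, seed, sizeof(seed), 0);

	if (n != (long)sizeof(seed))
		return -1;
	/* feed 'seed' into whatever userspace CPRNG the program uses */
	return 0;
}

Any of the proposed caps (64, 128, or 256 bytes) is comfortably above
what that pattern ever needs.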

