Subject: Re: [PATCH 7/14] random: Remove SA_SAMPLE_RANDOM from network drivers

On Sat, May 06, 2006 at 02:05:51PM -0400, Theodore Tso wrote:
> On Sat, May 06, 2006 at 11:48:08AM -0500, Matt Mackall wrote:
> > Case 3:
> > Hash function broken, entropy accounting is over-optimistic:
> > /dev/urandom and /dev/random are both equally insecure because both
> > are revealing more internal entropy than they're collecting. So you
> > should just use /dev/urandom because at least it doesn't block.
> >
> > Putting aside all the practical issues of what exactly is entropy and
> > what decent entropy sources are, we should be able to agree on this
> > much. And this basically says that if you do your entropy accounting
> > badly, you throw the baby out with the bathwater.
>
> Agreed, but I'd add an additional point of nuance; this assumes that the
> attacker (call him Boris for the sake of argument) can actually gain
> access to enough /dev/random or /dev/urandom outputs, and be
> knowledgeable about all other calls to /dev/random and exactly when
> they happen (since entropy extractions cause the TSC to be mixed into
> the pool) so Boris can actually determine the contents of the
> pool.

Yes, that's assumed, because otherwise /dev/urandom would be
sufficient in all cases.

> Note that simply "breaking" a cryptographic hash, in the sense
> of finding two input values that collide to the same output value,
> does not mean that the hash has been sufficiently analyzed that it
> would be possible to accomplish this feat.

I'm not talking about any existing attacks, I'm talking about what
would theoretically be possible were a first preimage attack on our
hash to become practical.

> Does this mean we should __depend__ on this? No, we should always do
> the best job that we can. But it's also fair to say that even if the
> hash function is "broken", that the results are not automatically
> going to be catastrophic. If the attacker can't get their hands on
> enough of an output stream from /dev/random, then it's not likely to
> do much. For an attacker who only has network access, this could be
> quite difficult.

All agreed. But that applies equally to /dev/urandom. The only thing that
distinguishes the two is entropy accounting, and entropy accounting
only makes a difference if it's conservative.

I am _not_ arguing that any of this is practical. I'm arguing that
it's completely beside the point.
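
To spell out what I mean by the accounting being the only
distinguisher, here's a toy model (not the real drivers/char/random.c
code; the names and numbers are invented). The two reads below share
the same extraction step and differ only in the bookkeeping around it,
so if the estimate feeding entropy_count is over-optimistic, the guard
in random_read() stops firing when it should and the blocking buys you
nothing:

/* Toy model of the /dev/random vs /dev/urandom split. */

#include <stdio.h>

static int entropy_count = 128;         /* bits we *believe* the pool holds */

static void extract(unsigned char *buf, int nbytes)
{
        int i;

        /* stand-in for hashing the pool and folding the hash back in */
        for (i = 0; i < nbytes; i++)
                buf[i] = 0;
}

/* /dev/urandom: extract no matter what the estimate says */
static int urandom_read(unsigned char *buf, int nbytes)
{
        extract(buf, nbytes);
        return nbytes;
}

/* /dev/random: refuse (block) once the estimate is exhausted */
static int random_read(unsigned char *buf, int nbytes)
{
        if (entropy_count < nbytes * 8)
                return -1;              /* would block for more entropy */
        entropy_count -= nbytes * 8;
        extract(buf, nbytes);
        return nbytes;
}

int main(void)
{
        unsigned char buf[16];

        printf("urandom: %d\n", urandom_read(buf, sizeof(buf)));
        printf("random:  %d\n", random_read(buf, sizeof(buf)));
        printf("random:  %d\n", random_read(buf, sizeof(buf)));
        return 0;
}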

> > But network traffic should be _assumed_ to be observable to some
> > degree. Everything else in network security makes the assumption that
> > traffic is completely snoopable. By contrast, we assume people can't
> > see us type our passwords[1]. So while our entropy estimator assumes
> > observability == 0, for the network case, 0 < observability <= 1. And
> > if it's greater than what our entropy estimator assumes, our entropy
> > estimates are now too optimistic and /dev/random security degrades to
> > that of /dev/urandom.
>
> The timing of network arrivals is observable to *some* degree. Of
> course, so is the timing from block I/O interrupts.

That's a whole 'nother topic we can tackle separately.

> For network traffic, again, it depends on your threat model.

Here's a threat model: I remotely break into your LAN and 0wn insecure
Windows box X that's one switch jump away from the web server Y that
processes your credit card transactions. I use standard techniques to
kick the switch into broadcast mode so I can see all its traffic, or
maybe I break into the switch itself and temporarily isolate Y from
the rest of the world so that only X can talk to it. I'm still in
Russia, but I might as well be in the same room.
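
To put crude numbers on the observability point (all of them
invented): say the estimator credits 12 bits for the TSC delta of each
network interrupt, but from that vantage point I can timestamp the
same packets well enough to narrow each interrupt down to roughly 16
plausible TSC values. Each sample is then worth at most log2(16) = 4
bits against me, and the entropy count runs ahead of reality by 8 bits
per packet.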

> That's why I think it should be configurable. If you don't have a
> real hardware number generator, maybe blocking would be preferable to
> not having good entropy. But in other circumstances, what people need
> is the best possible randomness they can get, and their security is
> not enhanced by simply taking it away from them altogether. That
> makes about as much sense as GNOME making its applications "easier to
> use" by removing functionality (to quote Linus).

Again, I think it's perfectly reasonable to sample from all sorts of
sources. All my issues are about the entropy accounting.
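
Which is why I'd keep feeding the samples in and just stop crediting
the suspect ones. A sketch of that split (invented names only, not the
real drivers/char/random.c interfaces):

/* Sketch of "sample everything, account for almost nothing". */

#include <stdint.h>
#include <stdio.h>

#define POOL_BYTES 128
#define POOL_BITS  (POOL_BYTES * 8)

static unsigned char pool[POOL_BYTES];
static int entropy_count;               /* bits we claim the pool holds */

static void mix_into_pool(const void *buf, int nbytes)
{
        const unsigned char *p = buf;
        static int pos;
        int i;

        /* placeholder stirring; the real mixing is far more careful */
        for (i = 0; i < nbytes; i++) {
                pool[pos] ^= p[i];
                pos = (pos + 1) % POOL_BYTES;
        }
}

static void credit_entropy(int bits)
{
        entropy_count += bits;
        if (entropy_count > POOL_BITS)
                entropy_count = POOL_BITS;
}

/* Possibly-observable source (e.g. a network interrupt): stir the
 * pool but credit nothing.  Mixing can't hurt; only the credit can
 * be over-optimistic. */
void add_untrusted_sample(uint64_t tsc)
{
        mix_into_pool(&tsc, sizeof(tsc));
}

/* Source we're willing to account for (e.g. keyboard timing):
 * stir the pool and credit whatever the estimator came up with. */
void add_trusted_sample(uint64_t tsc, int estimated_bits)
{
        mix_into_pool(&tsc, sizeof(tsc));
        credit_entropy(estimated_bits);
}

int main(void)
{
        add_untrusted_sample(0x0123456789abcdefULL);
        add_trusted_sample(0xfedcba9876543210ULL, 8);
        printf("entropy_count = %d bits\n", entropy_count);
        return 0;
}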

> > Yes, this is all strictly theoretical. But the usefulness of
> > /dev/random is exactly as theoretical. So if you use /dev/random for
> > its theoretical advantages (and why else would you?), this defeats
> > that.
>
> This becomes a philosophical argument. Yes, we should strive for as
> much theoretical perfection as possible. But at the same time, we
> need to live in the real world, and adding network entropy which can
> defeat the bored high school student in Russia using some black hat
> toolkit they downloaded off the internet is useful --- even if it can't
> defeat the NSA/FBI agent who can perform a black bag job and place a
> monitoring device on your internal ethernet segment.

No thoughts on scaling back our entropy estimates?
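
To be concrete about what "scaling back" could mean, here's the
dumbest possible version; the divisor and the cap below are pulled
out of thin air purely for illustration:

#include <stdio.h>

/* Shave whatever the per-sample estimator says before crediting it. */
static int conservative_credit(int estimated_bits)
{
        int credit = estimated_bits / 4;

        if (credit > 4)                 /* hard per-sample ceiling */
                credit = 4;
        return credit;
}

int main(void)
{
        int e;

        for (e = 0; e <= 32; e += 8)
                printf("estimator says %2d -> credit %d\n",
                       e, conservative_credit(e));
        return 0;
}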

--
Mathematics is the supreme nostalgia of our time.
