Date:	Mon, 20 May 1996 13:16:44 -0400
From:	"Theodore Ts'o" <>
Subject:	Re: CONFIG_RANDOM (compromise?)
   Date: Mon, 20 May 1996 12:00:27 +0200
   From: Harald Anlauf <anlauf@crunch.ikp.physik.th-darmstadt.de>

   [This mail is not CC'ed to the linux-kernel list]
Actually, yes it was....
   If somebody (else) runs a process constantly sucking numbers from /dev/random on an (maybe your) essentially "idle" machine, i.e. with little activity on keyboard, disk, network, etc., can you still guarantee that _you_ still get sufficiently good random numbers from /dev/random, to prevent any attacks, even if this "somebody else" communicates these numbers to an assumed attacker? (Do not assume that you can use e.g. the Pentium time stamp register.)
If you have a "bad guy" running on your machine, they can constantly suck numbers from /dev/random. This will cause a "denial of service attack", since /dev/random will only issue random numbers if sufficient entropy is available to generate them.
So an application which uses /dev/random can block; if the application does not want to block, it can open /dev/random in non-blocking mode (usually the recommended approach). However, this does not answer the question of what to do when /dev/random has been exhausted. The right course of action is probably to give the user a warning message and exit.
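As a concrete illustration (not part of the original mail), here is a minimal sketch of that pattern in C; the 16-byte buffer size is an arbitrary example:

    /* Read entropy from /dev/random without blocking; if the pool
     * is exhausted, warn the user and exit rather than hang. */
    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        unsigned char key[16];
        ssize_t n;
        int fd = open("/dev/random", O_RDONLY | O_NONBLOCK);

        if (fd < 0) {
            perror("open /dev/random");
            return 1;
        }
        n = read(fd, key, sizeof(key));
        if (n < 0 && errno == EAGAIN) {
            /* Entropy pool is empty: report it instead of blocking. */
            fprintf(stderr, "entropy pool exhausted; try again later\n");
            close(fd);
            return 1;
        }
        if (n < 0) {
            perror("read /dev/random");
            close(fd);
            return 1;
        }
        printf("got %d bytes of entropy\n", (int) n);
        close(fd);
        return 0;
    }

Note that a short read (fewer than 16 bytes) is also possible when the pool is low; a real application would loop or fall back accordingly.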
While this may not sound entirely satisfactory, consider what else an attacker could do if they have access to your machine: (a) they could try breaking in as root; (b) they could starve the system of resources by running a program which does the following (a runnable rendering appears after this list):
	while (1) {
		cp = malloc(1 megabyte);
		touch_all_memory(cp);
		fork();
	}
(c) they could break in using some neglected hole in (pick your choice of) sendmail, /proc, NIS, NFS, etc., etc., etc.
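For concreteness, a runnable version of the fragment in (b) might look like the following; touch_all_memory() is the mail's hypothetical helper, spelled out here as one write per page so the kernel must actually back the allocation. (Needless to say, do not run this on a machine you care about.)

    #include <stdlib.h>
    #include <unistd.h>

    #define MEGABYTE (1024 * 1024)

    /* Dirty one byte per page so every page is really allocated. */
    static void touch_all_memory(char *cp)
    {
        long i, page = sysconf(_SC_PAGESIZE);

        for (i = 0; i < MEGABYTE; i += page)
            cp[i] = 1;
    }

    int main(void)
    {
        char *cp;

        for (;;) {
            cp = malloc(MEGABYTE);
            if (cp)
                touch_all_memory(cp);
            fork();   /* each child continues the loop, multiplying the load */
        }
    }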
In the long run, a system which does fair resource allocation to prevent one user from grabbing all available CPU, virtual memory, and other resources will also have to treat /dev/random as a valuable resource whose use must be controlled, to prevent one user from grabbing all available entropy. However, this sort of resource control is hard to do right, especially if you want an efficient system! Given that we don't even handle memory exhaustion terribly well at the moment, entropy exhaustion is a similar (unsolved) problem in Linux.
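(A historical aside, not in the original mail: later kernels do expose the pool's fill level through /proc/sys/kernel/random/entropy_avail, which at least lets a supervisor watch entropy as a measurable resource. A sketch, assuming that interface is present; the 256-bit threshold is an arbitrary illustrative choice:)

    /* Poll the kernel's estimate of available entropy and warn
     * when the pool is nearly drained. */
    #include <stdio.h>

    int main(void)
    {
        int avail;
        FILE *f = fopen("/proc/sys/kernel/random/entropy_avail", "r");

        if (!f) {
            perror("open entropy_avail");
            return 1;
        }
        if (fscanf(f, "%d", &avail) != 1) {
            fprintf(stderr, "could not parse entropy_avail\n");
            fclose(f);
            return 1;
        }
        fclose(f);

        printf("entropy pool: %d bits available\n", avail);
        if (avail < 256)
            fprintf(stderr, "warning: entropy pool nearly exhausted\n");
        return 0;
    }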
When we solve the general resource allocation problem, it should not be terribly difficult to extend the solution to cover /dev/random as well. Why hasn't it been addressed in Linux so far? I suspect because there aren't that many Linux systems doing serious time-sharing. We have machines which act as network servers, and single-user desktop machines, but for those machines things like quotas and resource allocation aren't as important. While there are some time-sharing machines running Linux, they tend to be in the minority.
- Ted