Subject: Re: How to increat [sic.] max open files?
Date: Thu, 2 Jan 1997 17:02:48 -0500 (EST)
From: "Andrew E. Mileski" <>
> I think that a task, process, program, etc., that needs more than 100
> file handles is improperly written. Keeping that many files open at any
> one time will cause file destruction if the system crashes. On the other
> hand, opening/reading/writing/closing files in rapid succession is not
> very efficient. A file-handle limit forces a programmer to think about
> this and design (rather than just write) the program.
I agree - but the problem usually isn't files, it is TCP sockets, which use file descriptors. The software should multiplex the sockets, but I rarely see software that does this. UDP is an even better solution IMHO, as a single UDP socket can serve any number of clients.
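For illustration only (not part of the original mail), here is a rough sketch of the single-UDP-socket idea; the port number and buffer size are arbitrary choices of mine:

    /*
     * One UDP socket answers datagrams from any number of clients,
     * so the server consumes a single file descriptor.
     */
    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <netinet/in.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in addr, client;
        socklen_t len;
        char buf[512];
        ssize_t n;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(5000);        /* arbitrary port for the sketch */
        if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("socket/bind");
            return 1;
        }

        for (;;) {
            len = sizeof(client);
            n = recvfrom(s, buf, sizeof(buf), 0,
                         (struct sockaddr *)&client, &len);
            if (n < 0)
                continue;
            /* Reply to whichever client sent this datagram. */
            sendto(s, buf, n, 0, (struct sockaddr *)&client, len);
        }
    }

One descriptor handles every client, because recvfrom() reports each sender's address and sendto() replies to it; there is nothing per-connection to keep open.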
There are also two WELL KNOWN problems with increasing the limits: the kernel stack is too small, and a large amount of non-kernel code needs to be recompiled with the new limits. In order to achieve this, there _MUST_ be agreement between _ALL_ distribution packagers, not just developers and Linus.
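The compile-time side of that shows up most clearly with select(): fd_set is a fixed-size bitmap whose size is baked in by FD_SETSIZE when the program and library are built. A small sketch of my own to illustrate the point, assuming a system with <sys/select.h>:

    /*
     * fd_set is sized at compile time by FD_SETSIZE, so a descriptor
     * numbered >= FD_SETSIZE cannot safely be used with select()
     * unless libc and every program using it are rebuilt with a
     * larger limit - raising the kernel limit alone is not enough.
     */
    #include <stdio.h>
    #include <sys/select.h>

    int main(void)
    {
        fd_set readfds;

        printf("FD_SETSIZE = %d, sizeof(fd_set) = %zu bytes\n",
               FD_SETSIZE, sizeof(fd_set));

        FD_ZERO(&readfds);
        /* FD_SET(fd, &readfds) with fd >= FD_SETSIZE writes past the
         * end of the bitmap, which is exactly the recompilation
         * problem described above. */
        return 0;
    }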
--
Andrew E. Mileski                   mailto:aem@ott.hookup.net
Linux Plug-and-Play Kernel Project  http://www.redhat.com/linux-info/pnp/
XFree86 Matrox Team                 http://www.bf.rmit.edu.au/~ajv/xf86-matrox.html