Date:    Thu, 2 Jan 1997 13:38:36 -0500 (EST)
From:    "Richard B. Johnson" <>
Subject: Re: How to increat [sic.] max open files?
On Thu, 2 Jan 1997, James L. McGill wrote:
> On Thu, 2 Jan 1997, Marko Sepp wrote:
> >
> > I am trying to increase the maximum number of open files
> > (currently 256). I use Linux 2.0.0 (slackware 96).
> [SNIP]
>
> Er, NO. With as much attention as this issue has had in recent months,
> I am quite surprised that the kernel and libc code have not adopted
> increased filehandle support. There are still people saying that "256
> filehandles should be enough for anyone." Isn't that attitude
> philosophically flawed, especially in the face of the people who do
> need e.g. this scaling factor?
> [SNIP]
>
> We await a canonical solution to the "File Descriptor Max" problem.
> We would like to see "no limit", but a high limit would be welcome.
> The best I have managed to do so far is to say that this appears to
> work, given that I recompile the software listed above.
>
> [PATCH SNIPPED]
There needs to be "someplace" to put information about every file descriptor that might be used when a process is created. In other words, each process "table" is of fixed length. It therefore seems that each process can only have a certain number of file descriptors. If you make space available for more than "normal" (whatever that is), you waste valuable RAM. If we were not stuck with "FILE *file" stuff in 'C', the maximum number of FD's could be limited only by the largest positive number that can be described by an "int" on the platform in use.
This is not true of all operating systems. Some operating systems make artificial limits, but the physical limit is ONLY the maximum number that can be stored in an "int". Anyway, your "unlimited" file-handles isn't possible. Even a long int or a quadword, etc., have limits. This presumes that such a handle won't be used as an index into a fixed-length table of some sort.
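On Linux the "artificial" per-process ceiling is at least visible from
userspace. Here is a small check using only the standard POSIX
getrlimit/setrlimit and sysconf calls; whether raising the soft limit
actually succeeds depends on the kernel and on the process's privileges.

#include <stdio.h>
#include <unistd.h>
#include <sys/resource.h>

int main(void)
{
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
                printf("soft limit: %lu, hard limit: %lu\n",
                       (unsigned long)rl.rlim_cur,
                       (unsigned long)rl.rlim_max);

        printf("sysconf(_SC_OPEN_MAX): %ld\n", sysconf(_SC_OPEN_MAX));

        rl.rlim_cur = rl.rlim_max;      /* ask for the hard limit */
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
                perror("setrlimit");

        return 0;
}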
Therefore, the natural question is:
How many files SHOULD a process be able to access? If you say "all it wants", then you need some other kind of operating system. If you say 10,000, I can show a good reason why you will need 10,001. In other words, there MUST be some kind of limit.
I think that a task, process, program, etc., that needs more than 100 file handles is improperly written. Keeping that many files open at any one time will cause file destruction if the system crashes. On the other hand, opening/reading/writing/closing files in rapid succession is not very efficient. A file-handle limit forces a programmer to think about this and design (rather than just write) the program.
Let's say that you have a "mount daemon" that is going to perform NFS file system access for thousands of clients on the network. I think another daemon should be created if the first runs out of file-handles. Each time a daemon's resource capability is exceeded, another is created.
Each time a daemon closes its last file, it expires. Now, you have 100 daemons when you need them and one daemon when that is all you need.
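A rough sketch of that pattern, assuming a simple fork-based design (this is
hypothetical code, not from any real mount daemon): a worker handles requests
until open() reports EMFILE, at which point the dispatcher forks a fresh
worker with an empty descriptor table, and a worker that has closed its last
file simply exits.

#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static int nopen;                       /* files this worker currently holds */

/* Try to take a request; returns the new fd, or -1 if this worker is full. */
static int take_request(const char *path)
{
        int fd = open(path, O_RDONLY);

        if (fd < 0)
                return (errno == EMFILE) ? -1 : fd;
        nopen++;
        return fd;
}

/* Release a request; the worker expires once it holds nothing. */
static void drop_request(int fd)
{
        close(fd);
        if (--nopen == 0)
                _exit(0);
}

/* In the dispatcher: when take_request() says "full", fork() another worker
 * and hand the request to it, instead of raising the per-process limit. */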
The same is true of database programs, etc. There must be some kind of discipline enforced by the operating system, or the result is chaos.
Cheers,
Dick Johnson
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Richard B. Johnson
Project Engineer
Analogic Corporation
Voice : (508) 977-3000 ext. 3754
Fax : (508) 532-6097
Modem : (508) 977-6870
Ftp : ftp@boneserver.analogic.com
Email : rjohnson@analogic.com, johnson@analogic.com
Penguin : Linux version 2.1.18 on an i586 machine (66.15 BogoMips).
Warning : It's hard to remain at the trailing edge of technology.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-