Date:    Fri, 3 Jan 1997 11:51:31 -0500 (EST)
From:    "Richard B. Johnson" <>
Subject: Re: How to increat [sic.] max open files?
[SNIP]
> I run into the 256 fd limit all the time. I've written several
> multi-threaded servers that are *designed* to handle several thousand
> connections from several thousand clients at once. To test how many
> connections these servers can handle, I need a process to *open* several
> thousand connections to it.
>
> Eric

The number 256 certainly does look like a "made up" number, as arbitrary as
they come. But my point is that if the number were 1024 or 1.84e19, it would
still be "wrong". I have a server that handles all the source-code access in
the whole company. When it runs out of resources, it makes another process,
etc. The limit becomes the amount of RAM and/or page-file. I didn't write the
code, so I am not blowing my own horn. The code was written by someone who was
used to the limited resources of a VAX, then ported to Suns. I ported it to
Linux by simply compiling and fixing a few "includes".
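For illustration only -- this is a sketch of the "make another process" idea,
not the code I ported: a dispatcher holds nothing but the listening socket,
and each worker accepts a batch of connections that fits comfortably under the
per-process fd limit, tells the parent to fork the next worker, then goes off
and services its batch. The service as a whole is then limited by RAM and
page-file, not by any one process's descriptor table. serve() and
CONNS_PER_WORKER are invented names for the example.

/* Sketch only: one worker process per batch of connections. */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define CONNS_PER_WORKER 200            /* safely below a 256 fd limit */

extern void serve(int fds[], int n);    /* hypothetical: service n clients */

int dispatcher(int listen_fd)
{
    for (;;) {
        int notify[2];
        pid_t pid;

        if (pipe(notify) < 0) {
            perror("pipe");
            return 1;
        }
        pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0) {                         /* worker */
            int fds[CONNS_PER_WORKER], n = 0;

            close(notify[0]);
            while (n < CONNS_PER_WORKER) {
                int fd = accept(listen_fd, NULL, NULL);
                if (fd < 0)
                    break;                      /* error or out of fds */
                fds[n++] = fd;
            }
            write(notify[1], "x", 1);           /* "I'm full, fork the next one" */
            close(notify[1]);
            close(listen_fd);
            serve(fds, n);
            _exit(0);
        }
        /* parent */
        close(notify[1]);
        {
            char c;
            read(notify[0], &c, 1);             /* returns on the byte, or on EOF
                                                   if the worker died early */
        }
        close(notify[0]);
        while (waitpid(-1, NULL, WNOHANG) > 0)
            ;                                   /* reap finished workers */
    }
}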
The problem with limits, whether artificial or not, is that they always exist.
In 1967, the need first arose to "sort" a telephone directory on a machine (an
IBM 360) with 4k of real core. A procedure was designed, called the "Chicago
Sort" (it was the Chicago telephone database). The designer knew that you only
had to have two strings in memory at any one time, so RAM was not a problem.
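To make the "two strings in memory" point concrete, here is a sketch (mine,
not the original Chicago Sort) of one merge pass: two already-sorted files are
combined while holding only the current line from each, so the directory never
has to fit in core. The file handles and the line-length cap are illustrative.

/* Merge two already-sorted files, holding only one line from each. */
#include <stdio.h>
#include <string.h>

#define MAXLINE 128     /* illustrative cap on one directory entry */

void merge_pass(FILE *a, FILE *b, FILE *out)
{
    char la[MAXLINE], lb[MAXLINE];
    char *pa = fgets(la, sizeof la, a);
    char *pb = fgets(lb, sizeof lb, b);

    while (pa && pb) {
        if (strcmp(la, lb) <= 0) {      /* emit the smaller line, refill it */
            fputs(la, out);
            pa = fgets(la, sizeof la, a);
        } else {
            fputs(lb, out);
            pb = fgets(lb, sizeof lb, b);
        }
    }
    while (pa) { fputs(la, out); pa = fgets(la, sizeof la, a); }   /* drain a */
    while (pb) { fputs(lb, out); pb = fgets(lb, sizeof lb, b); }   /* drain b */
}

Repeat the pass over successively longer sorted runs and any amount of data
gets sorted in a fixed few hundred bytes of working storage.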
If someone in the nineties were given the task, they would state: "It can't be
done because...", and they would claim that the entire database would have to
be present in RAM all at once, etc. This is because we have gotten used to
"unlimited" resources, and many have written code that needs such an
environment. A bit more design before writing would not only remove potential
problems, but probably also result in much better performance for many
shared-resource applications such as servers.
In the network example above, it is possible, with very little extra code, to
"fail-over" to another machine when the resources of the original machine are
becoming exhausted. This is entirely transparent to the client being served.
But here you have to test an existing server that was not completely designed
before it was implemented. Note that I am not implying anything bad, only
making an observation. I see on the "net" that some changes are being made to
the system calls that crash when you modify the header files to use more FDs.
My guess is that the artificial limit will be raised, then raised, then
raised, etc.....
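My guess at why those crashes happen (my reading, not something taken from the
patches themselves): fd_set is a fixed-size bit array whose size is compiled
into libc and into every existing binary via FD_SETSIZE. Raise the limit in
the headers and a process can be handed a descriptor number that no longer
fits in the bitmap, and FD_SET() then writes past the end of the structure. A
defensive check, as a sketch:

#include <stdio.h>
#include <sys/types.h>
#include <sys/time.h>
#include <unistd.h>

/* Wait for fd to become readable, refusing fds the fd_set cannot hold. */
int watch(int fd)
{
    fd_set rd;

    if (fd >= FD_SETSIZE) {
        /* Without this check, FD_SET(fd, &rd) scribbles outside 'rd'
         * and the caller of select() is the thing that "crashes". */
        fprintf(stderr, "fd %d exceeds FD_SETSIZE (%d)\n", fd, FD_SETSIZE);
        return -1;
    }
    FD_ZERO(&rd);
    FD_SET(fd, &rd);
    return select(fd + 1, &rd, NULL, NULL, NULL);
}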
Cheers,
Dick Johnson
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
Richard B. Johnson
Project Engineer
Analogic Corporation
Voice   : (508) 977-3000 ext. 3754
Fax     : (508) 532-6097
Modem   : (508) 977-6870
Ftp     : ftp@boneserver.analogic.com
Email   : rjohnson@analogic.com, johnson@analogic.com
Penguin : Linux version 2.1.20 on an i586 machine (66.15 BogoMips).
Warning : It's hard to remain at the trailing edge of technology.
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-