Date: Fri, 23 Jan 2009 14:02:10 -0800
From: David Daney <>
Subject: Re: 2.6.28, rlimits, performance and debian etch
Florian Weimer wrote:
> * Peter Palfrader:
>
>> Turns out that script is forking a lot and something in it or python or
>> wherever closes all the file descriptors it doesn't want to pass on.
>> That is, it starts at zero and goes up to ulimit -n/RLIMIT_NOFILE and
>> closes them all with a few exceptions.
>>
>> Turns out that takes a long time when your ulimit -n is now 2^20 (1048576).
>
> Interesting.
>
> Can we make /proc more-or-less mandatory, so that the file descriptor
> list can be retrieved explicitly?
One problem is that for values of RLIMIT_NOFILE below something like 4096, it is much faster to call sys_close() on every possible value than to iterate through a handful of open files listed in /proc/self/fd using opendir(3)/readdir(3).
Obviously for some large values of RLIMIT_NOFILE, this is no longer true.
People who have tuned their code based on measuring that difference end up getting screwed when RLIMIT_NOFILE unexpectedly increases.
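To make the tradeoff concrete, here is a minimal sketch of the two strategies; the function names and the lowfd parameter are illustrative, not from this thread:

/* Minimal sketch of the two fd-closing strategies discussed above.
 * Function names are illustrative, not from the original thread. */
#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/resource.h>

/* Brute force: one close() per possible descriptor, open or not.
 * Cheap when RLIMIT_NOFILE is small; O(limit) syscalls when huge. */
static void close_all_bruteforce(int lowfd)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return;
	/* real code must also handle rlim_cur == RLIM_INFINITY */
	for (rlim_t fd = lowfd; fd < rl.rlim_cur; fd++)
		close((int)fd);
}

/* /proc-based: close only descriptors that actually exist, as
 * listed in /proc/self/fd.  Wins when few fds are open but the
 * limit is large; pays opendir/readdir overhead regardless. */
static void close_all_proc(int lowfd)
{
	DIR *d = opendir("/proc/self/fd");
	struct dirent *e;

	if (d == NULL)
		return;
	while ((e = readdir(d)) != NULL) {
		if (e->d_name[0] == '.')
			continue;	/* skip "." and ".." */
		int fd = atoi(e->d_name);
		if (fd >= lowfd && fd != dirfd(d))
			close(fd);
	}
	closedir(d);
}

The brute-force loop issues one syscall per possible descriptor, so its cost scales with the limit; the /proc walk scales with the number of descriptors actually open instead.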
The real solution is to convert your user-space programs to use the new syscalls that allow for race-free setting of close-on-exec. Then you no longer need to mess around with iterating over file descriptors at all.
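For reference, a minimal sketch of what those syscalls look like in practice (the function name and file path are illustrative; the flags themselves were merged in 2.6.23 and 2.6.27):

/* Minimal sketch of race-free close-on-exec, using flags merged in
 * 2.6.23 (O_CLOEXEC) and 2.6.27 (pipe2(2), SOCK_CLOEXEC).  Each
 * descriptor is marked close-on-exec atomically at creation, so a
 * concurrent fork()/exec() in another thread cannot leak it, and
 * no close() loop is needed before exec. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <unistd.h>
#include <sys/socket.h>

int demo(void)
{
	/* Regular file: O_CLOEXEC at open time. */
	int fd = open("/etc/hostname", O_RDONLY | O_CLOEXEC);

	/* Pipe: pipe2() instead of pipe() followed by fcntl(). */
	int p[2];
	if (pipe2(p, O_CLOEXEC) != 0)
		return -1;

	/* Socket: SOCK_CLOEXEC or'ed into the type argument. */
	int s = socket(AF_INET, SOCK_STREAM | SOCK_CLOEXEC, 0);

	/* The old two-step idiom, by contrast, is racy in threaded
	 * programs: a fork() between these two calls leaks the fd. */
	int racy = open("/etc/hostname", O_RDONLY);
	if (racy >= 0)
		fcntl(racy, F_SETFD, FD_CLOEXEC);

	return (fd >= 0 && s >= 0) ? 0 : -1;
}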
David Daney