From: Andrey Borzenkov <>
Subject: Number of open files scalability?
Date: Sat, 20 Jun 2009 13:32:47 +0400
Hi,
we have a customer that requires a large number of open files. Basically, it is SAP with a large Oracle database and a relatively large number of concurrent connections from worker processes. Right now the number of permanently open files is above 128000; with the current trends of database and load growth it could easily climb to 1000000 and beyond.
So the questions are
- is there any per-process or per-user limit on the number of open files imposed by the kernel (other than, of course, those set by rlimits)?
- is there any limit on fs/file-max other than the one imposed by its data type (int)? (A sketch of how we check and raise these limits today follows this list.)
- finally, how scalable is the implementation? Will having one million open files impose any noticeable slowdown? If yes, which operations are affected? That is, opening new files or creating new processes is not that important, but having to search through 1000000 files on every operation would be fatal. (A rough measurement sketch follows further below.)
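To be concrete, this is roughly how we check and bump the limits today. It is only a minimal sketch: the 1000000 value is our projected figure, not anything derived from the kernel, and raising the hard limit of course needs root or CAP_SYS_RESOURCE.

/* Minimal sketch: inspect and raise RLIMIT_NOFILE, and read the
 * system-wide handle counters from /proc/sys/fs/file-nr.
 * The target of 1000000 is just our projected figure. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	struct rlimit rl;

	/* Per-process limit on open file descriptors (the rlimit side). */
	if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
		printf("RLIMIT_NOFILE: soft=%llu hard=%llu\n",
		       (unsigned long long)rl.rlim_cur,
		       (unsigned long long)rl.rlim_max);

	/* Try to raise the limits to the projected need; raising the hard
	 * limit requires root or CAP_SYS_RESOURCE. */
	rl.rlim_cur = 1000000;
	if (rl.rlim_max < rl.rlim_cur)
		rl.rlim_max = rl.rlim_cur;
	if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
		perror("setrlimit(RLIMIT_NOFILE)");

	/* System-wide view: allocated / free / maximum file handles. */
	FILE *f = fopen("/proc/sys/fs/file-nr", "r");
	if (f) {
		unsigned long alloc, unused, max;
		if (fscanf(f, "%lu %lu %lu", &alloc, &unused, &max) == 3)
			printf("file-nr: allocated=%lu free=%lu max=%lu\n",
			       alloc, unused, max);
		fclose(f);
	}
	return 0;
}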
The platform is x86_64, SLES 9 with a likely update to SLES 10.
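For the scalability question, this is the kind of micro-test we plan to run to see whether per-fd operations degrade once the descriptor table is large. Again only a rough sketch: N, SAMPLES and the use of /dev/null are placeholders, and RLIMIT_NOFILE must be raised first (see the previous sketch).

/* Rough benchmark sketch: hold N descriptors open, then time how long
 * allocating and releasing one more fd takes on top of the full table.
 * Link with -lrt on older glibc for clock_gettime(). */
#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define N 100000	/* descriptors to keep open (placeholder) */
#define SAMPLES 1000	/* open/close pairs to time on top of that */

static double now(void)
{
	struct timespec ts;
	clock_gettime(CLOCK_MONOTONIC, &ts);
	return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
	/* Keep N descriptors open so the fd table is large; this assumes
	 * RLIMIT_NOFILE has already been raised above N. */
	for (long i = 0; i < N; i++) {
		if (open("/dev/null", O_RDONLY) < 0) {
			perror("open");
			return 1;
		}
	}

	/* Time additional open()/close() pairs with the table already full. */
	double start = now();
	for (int i = 0; i < SAMPLES; i++) {
		int fd = open("/dev/null", O_RDONLY);
		if (fd >= 0)
			close(fd);
	}
	double elapsed = now() - start;
	printf("avg open+close with %d fds held: %.2f us\n",
	       N, elapsed / SAMPLES * 1e6);
	return 0;
}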
Thank you!
-andrey