Date:	Tue, 25 Nov 1997 14:21:50 +0100 (MET)
From:	Ingo Molnar <>
Subject:	Re: fork: out of memory
On Tue, 25 Nov 1997, Alan Cox wrote:
> > Maybe it would be a wise idea to make a few pointers instead of
> > fd[NR_OPEN]. Every pointer would point to a smaller table of, let's
> > say, 64 file descriptors and would be allocated as needed. The first
> > such table would be in files_struct itself.
[...]
> and to allocate initially on a 64 fd break point. So you malloc
> one files_struct + 64 * (struct file *). That does however require
> you write the code atomically and safely handle growing the file table
> - which is actually quite hard if you want speed.
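
in rough user-space terms, the quoted scheme might look something like
the sketch below. all names here are invented for illustration, and the
locking that makes growth hard in practice is omitted:

#include <stdlib.h>
#include <string.h>

#define FD_CHUNK 64

struct file;				/* opaque for this sketch */

struct my_files_struct {
	int max_fds;			/* current table capacity */
	struct file **fd;		/* points at fd_array while small */
	struct file *fd_array[FD_CHUNK];/* first 64 slots, no extra malloc */
};

/* one allocation covers the struct plus the first 64 fd slots */
static struct my_files_struct *alloc_files(void)
{
	struct my_files_struct *f = calloc(1, sizeof(*f));

	if (f) {
		f->max_fds = FD_CHUNK;
		f->fd = f->fd_array;
	}
	return f;
}

/*
 * the hard part Alan mentions: a real kernel must swap in the bigger
 * table without breaking concurrent users of f->fd. ignored here.
 */
static int expand_fd_table(struct my_files_struct *f)
{
	int new_max = f->max_fds + FD_CHUNK;
	struct file **nfd = calloc(new_max, sizeof(*nfd));

	if (!nfd)
		return -1;
	memcpy(nfd, f->fd, f->max_fds * sizeof(*nfd));
	if (f->fd != f->fd_array)
		free(f->fd);
	f->fd = nfd;
	f->max_fds = new_max;
	return 0;
}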
at the point where we notice that there are no more free fds, we have lost anyway, performance-wise: we've just scanned the 'allocated files' bitmap at least once.
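
that scan is linear in the table size; simplified to bit-at-a-time (a
real implementation would go word-at-a-time), it is essentially:

/* simplified illustration only -- not the actual allocation code */
static int find_free_fd(unsigned long *open_fds, int max_fds)
{
	int fd;

	for (fd = 0; fd < max_fds; fd++)
		if (!(open_fds[fd / (8 * sizeof(long))] &
		      (1UL << (fd % (8 * sizeof(long))))))
			return fd;
	return -1;		/* table full: the slow path above */
}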
what about using a global (filestruct:fd) hash table to index files, a per-filestruct ringlist to fast-zap files, and a 'close-on-exec' ringlist?
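
a hypothetical shape for that (none of these names exist anywhere;
hashing and sizes are placeholders):

#define FD_HASH_SIZE 1024

struct file;
struct my_files_struct;

struct fd_entry {
	struct my_files_struct *files;		/* owner */
	int fd;
	struct file *file;
	struct fd_entry *hash_next;		/* global hash chain */
	struct fd_entry *ring_next, *ring_prev;	/* per-filestruct ring */
};

static struct fd_entry *fd_hash[FD_HASH_SIZE];

static unsigned int fd_hashfn(struct my_files_struct *files, int fd)
{
	return ((unsigned long)files / sizeof(void *) + fd) % FD_HASH_SIZE;
}

/* lookup replaces the old files->fd[fd] direct indexing */
static struct fd_entry *lookup_fd(struct my_files_struct *files, int fd)
{
	struct fd_entry *e = fd_hash[fd_hashfn(files, fd)];

	while (e && (e->files != files || e->fd != fd))
		e = e->hash_next;
	return e;
}

exit() walks the per-filestruct ring to drop every entry without
scanning any table; a second pair of links threaded through only the
close-on-exec entries would give exec() the same cheap walk.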
we don't have a linear socket table either, so why not get rid of the direct indexing altogether?
-- mingo