Date:    Wed, 01 Mar 2000 00:28:49 +0000
From:    Ed Tomlinson <>
Subject: Re: very large directories
Hi,
You might want to try reiserfs. One of the tests they run on it creates a directory with many entries (I believe up to a million has been tried).
For the latest version see:
ftp://ftp.devlinux.com/pub/namesys/linux-2.2.14-reiserfs-3.5.17-patch.gz
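If you want to reproduce that kind of test yourself, something along the lines of the rough sketch below should do it. This is untested and not part of the reiserfs test suite; the directory name "bigdir" and the one-million count are just placeholders I picked.

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>
#include <sys/types.h>

int main(void)
{
        char name[64];
        int i, fd;

        /* "bigdir" and the 1,000,000 count are arbitrary choices. */
        mkdir("bigdir", 0755);

        for (i = 0; i < 1000000; i++) {
                snprintf(name, sizeof(name), "bigdir/f%07d", i);
                fd = open(name, O_CREAT | O_WRONLY, 0644);
                if (fd < 0) {
                        perror(name);
                        return 1;
                }
                close(fd);
        }
        return 0;
}

Once the files exist, timing a few stat() or open() calls in that directory should show whether lookups stay fast.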
Ed Tomlinson
tomlins@cam.org
Matti Aarnio wrote:
> On Tue, Feb 29, 2000 at 09:16:52AM -0500, nbecker@fred.net wrote:
> > Sorry, I wasn't very clear. I mean that after the program is done
> > creating all these files, any operations on the files are
> > extremely slow. At this time the disk activity has stopped.
> >
> > I suspect the answer is that the data structures used to represent
> > directories are not well suited to efficient operations on extremely
> > large directories, but I was just curious.
>
> Indeed they aren't. The more filenames you have in the
> directory, the slower the search will be. Overall it
> will be on the order of O(n^2), where n is the number of filenames.
>
> There has been talk about having B-tree on-disk data structures
> for directory processing. Nothing visible of it yet.
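To make the quadratic behaviour concrete: with a flat, linear directory every lookup is a scan of the whole entry list, so touching each of n files costs on the order of n^2 name comparisons in total. The sketch below is my own illustration, not the actual kernel code, and the struct layout is made up.

#include <string.h>

/* Simplified stand-in for an on-disk directory entry (not the real layout). */
struct dirent_on_disk {
        unsigned long inode;
        char name[256];
};

/* One lookup walks the whole list: O(n) comparisons in the worst case.
 * Doing one such lookup per file, for all n files, is therefore O(n^2). */
long linear_lookup(const struct dirent_on_disk *dir, long nentries,
                   const char *name)
{
        long i;

        for (i = 0; i < nentries; i++)
                if (strcmp(dir[i].name, name) == 0)
                        return (long)dir[i].inode;
        return -1;
}

A B-tree (or hashed) directory replaces that scan with an O(log n) or O(1) search per lookup, which is what the on-disk B-tree proposals are about.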
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@vger.rutgers.edu
Please read the FAQ at http://www.tux.org/lkml/