From: Linus Torvalds
Date: 14 Jul 1996
Subject: Re: Is any file system on Linux appropriate for very large directories?


On Fri, 12 Jul 1996, Eric Benson wrote:
>
> We have an application here that uses lots of files in a single
> directory. At the time it was set up, it didn't seem to be a problem.
> However, due to Amazon.com's 30 percent per month growth rate, this is
> now getting to be a serious problem due to the time (and kernel lockup)
> required for linear searching of directories. (By the way, this
> application is currently running on Suns, not on Linux, but moving it to
> Linux is an option we are considering.)

Ok, may I just suggest you accept the fact that large directories are
going to result in slower lookups, and try to overcome that using some
simple change to your setup?

Now, I admit that using a hashed directory lookup strategy (or even just
sorted directories and binary searches or whatever) is a reasonable thing
to do, but on the other hand I don't feel it is necessarily the _right_
thing to do. I don't think the directory structure of a filesystem is
necessarily meant to be a database on any larger scale, and on a smaller
scale there are problems with the "faster" lookup strategies (more
complexity, more overhead for small directories).

> The "right" solution to this
> problem is to reimplement our application using a "real" database, but
> it is possible that it could be solved simply by using a file system
> that uses some kind of hashing for name lookup!

The best (in my opinion) way to do the hashing is actually to do it at
user level. It can often be trivial, especially if your "database" has
simple rules governing the filenames. The obvious approach is to use the
tree-like structure of the filesystem to good advantage. That gives you
a kind of "binary lookup", but done right you can use a base other than
2 and get even _better_ performance.

The obvious examples of this are home directories or even just the
terminfo "database". Instead of having one directory with lots of files:

aardvark
boa
cat
...
zebra

you have a directory structure with

a/aardvark
a/...
b/boa
...
z/zebra

and you can expand that to any number of levels you like (and you
obviously don't have to do it alphabetically: you can trivially hash the
lookup any way that suits your particular file distribution). The
changes to any code doing the lookups are usually pretty trivial, and it
scales a lot better than having just one flat directory structure.
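
To make that concrete: with two levels keyed on the first two characters
(assuming lowercase names) you get up to 26*26 = 676 buckets, so even a
100,000-file database averages roughly 150 names per leaf directory
instead of one 100,000-entry linear scan. Here is a rough sketch in C;
the "db/" prefix and the two-level scheme are just made-up examples:

	#include <stdio.h>

	/*
	 * Map a flat filename to its hashed location, e.g.
	 * "zebra" -> "db/z/e/zebra". Two levels keyed on the first
	 * two characters; names shorter than two characters fall
	 * back to '_'.
	 */
	static void hashed_path(const char *name, char *buf, size_t len)
	{
		char c1 = name[0] ? name[0] : '_';
		char c2 = (name[0] && name[1]) ? name[1] : '_';

		snprintf(buf, len, "db/%c/%c/%s", c1, c2, name);
	}

	int main(void)
	{
		char path[256];

		hashed_path("zebra", path, sizeof(path));
		printf("%s\n", path);	/* prints "db/z/e/zebra" */
		return 0;
	}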

There are other advantages to using sub-directories too: it's a lot
easier expanding the database to cover multiple disks using symlinks etc.
And the _really_ nice part about this kind of hashing is that because
it's done at user level, you can make the hash suit the _application_,
rather than trying to have some generic hash inside the filesystem that
would have to suit _everything_.
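
For example, once one hash bucket outgrows its disk you can move that
subdirectory elsewhere and leave a symlink in its place, and every
lookup through the hashed path keeps working unchanged. All the paths
below are made up:

	#include <stdio.h>
	#include <unistd.h>

	/*
	 * After copying db/m over to /disk2/db-m and removing the
	 * old directory, leave a symlink so lookups are untouched.
	 */
	int main(void)
	{
		if (symlink("/disk2/db-m", "db/m") < 0) {
			perror("symlink");
			return 1;
		}
		return 0;
	}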

Now, the obvious downside is that you would have to change your
application and re-order your current database, but that can often be
trivial (if you do it alphabetically like above, you can write a trivial
shell-script to create the new directory layout, and changing the
application to use that is not likely to be a problem either).
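
For concreteness, here is that re-ordering sketched in C instead of
shell (the directory names "flat" and "db" are made up, and error
handling is minimal); rename() keeps it cheap as long as everything
stays on one filesystem:

	#include <stdio.h>
	#include <dirent.h>
	#include <sys/types.h>
	#include <sys/stat.h>

	/* Move every file in "flat/" into "db/<first char>/<name>". */
	int main(void)
	{
		DIR *dir = opendir("flat");
		struct dirent *de;
		char bucket[64], from[512], to[512];

		if (!dir)
			return 1;
		while ((de = readdir(dir)) != NULL) {
			if (de->d_name[0] == '.')
				continue;	/* skip dot entries */
			snprintf(bucket, sizeof(bucket), "db/%c",
				 de->d_name[0]);
			mkdir(bucket, 0755);	/* ok if it exists */
			snprintf(from, sizeof(from), "flat/%s", de->d_name);
			snprintf(to, sizeof(to), "%s/%s", bucket, de->d_name);
			rename(from, to);
		}
		closedir(dir);
		return 0;
	}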

Another nice thing about using filesystem subdirectories this way is that
it's portable. It works on just about anything, ranging from DOS/Win/NT
to every UNIX out there and stuff like VMS etc, and you don't have to
worry about how the OS does lookups. (Well, you have to assume that the
OS supports subdirectories, and that rules out DOS 1.0, but I don't think
that is likely to be a real portability problem ;-)

Linus

