 
Subject: Re: Is any file system on Linux appropriate for very large directories?
Hi,

> that uses some kind of hashing for name lookup! A quick review of the
> file systems currently available on Linux suggests that the only one
> that uses hashing is the Amiga file system. I don't mean to be
> prejudiced, but it's hard to imagine that the Amiga FS is the going to
> be the best choice for us.

Amiga's FFS performs very badly for large directories. The hashing used
effectively divides the linear directory list into 76 chains, each about
1/76 of the original size. All data for one file is in one disk block,
so for a directory with 10000 entries an average lookup needs about
10000/(76*2) ~= 65.8 read accesses (if memory serves me right - it's a
long time since I last hacked on Amiga filesystems). As you can see this
is still O(n), which is truly bad. Apparently the design was made with
floppy disks in mind - small directories have very fast access. There are
newer variants of the Amiga FFS which perform better by using directory
caches. These are a bit slower for file creation, much faster for other
directory operations, and need a few percent more disk space for the
directory cache.
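
A minimal sketch of that cost model in C - the bucket count of 76 and
the one-block-per-entry chain walk are taken straight from the
description above, not checked against any actual FFS source:

    #include <stdio.h>

    /* Each directory entry lives in its own header block, and the hash
     * splits the directory into BUCKETS chains.  A successful lookup
     * walks one chain and on average inspects half of it, reading one
     * disk block per entry inspected. */
    #define BUCKETS 76

    static double avg_reads(unsigned entries)
    {
        return (double)entries / (BUCKETS * 2);
    }

    int main(void)
    {
        unsigned sizes[] = { 100, 1000, 10000 };
        for (unsigned i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
            printf("%5u entries -> %6.1f block reads per lookup\n",
                   sizes[i], avg_reads(sizes[i]));
        return 0;
    }

For 10000 entries this prints about 65.8 reads - the hashing shrinks the
constant factor by 1/76, but the cost still grows linearly with the
number of entries.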

I think Stephen Tweedie has some plans to speed up ext2fs for large
directories using hashing, so stay tuned.

Ralf

