Date: Tue, 29 Feb 2000 09:43:41 -0500 (EST)
From: "Mike A. Harris" <>
Subject: Re: very large directories
On 29 Feb 2000 nbecker@fred.net wrote:
>A coworker has an application that produces 10's of thousands of small
>files. It seems the performance of even a simple operation, such as
>'rm -rf' or find | xargs rm is very very slow.
>
>What are the mechanisms that limit performance? Is it the type of
>data structure used to represent directories?
Every time a file needs to be opened, the directory has to be searched
for that name, and on ext2 that lookup is a linear scan of the
directory entries. With 10,000 or more files in a single directory,
every open or unlink walks a long list of entries, which takes time
and likely wipes out the directory entry cache, or at least mucks with
it. Deleting the files one at a time, as 'rm -rf' does, is therefore
roughly quadratic: each unlink rescans the (shrinking) directory. The
small test program below illustrates the effect.
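A minimal sketch, not a rigorous benchmark: it creates N files in one
directory and times unlinking them all. The directory name "bigdir"
and the value of N are arbitrary choices. On a filesystem whose
directory lookup is a linear scan (like ext2 here), doubling N should
roughly quadruple the unlink time.

/* Create N files in one directory, then time unlinking them all.
 * With linear directory lookup, each unlink rescans the directory,
 * so total time grows roughly as N squared. */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <sys/time.h>

#define N 20000				/* arbitrary file count */

static double now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(void)
{
	char name[32];
	double t0;
	int i, fd;

	if (mkdir("bigdir", 0755) || chdir("bigdir")) {
		perror("bigdir");
		return 1;
	}
	for (i = 0; i < N; i++) {
		sprintf(name, "f%05d", i);
		fd = open(name, O_CREAT | O_WRONLY, 0644);
		if (fd < 0) { perror("open"); return 1; }
		close(fd);
	}
	t0 = now();
	for (i = 0; i < N; i++) {
		sprintf(name, "f%05d", i);
		unlink(name);
	}
	printf("unlinked %d files in %.2f s\n", N, now() - t0);
	return 0;
}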
Anyone concerned with file access performance should avoid putting
that many files in a single directory. Either amalgamate the small
files into one file (or a few files) and change the file access logic
in the app, or else create subdirectories and divide the files among
them; a common way to do the latter is sketched below.
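The usual trick for the subdirectory route is to hash each filename
into a fixed set of bucket directories so no single directory grows
large. A minimal sketch of the idea; the hash, the 256-bucket layout,
and the names here are illustrative, not anything the app in question
actually uses:

#include <stdio.h>

/* Simple string hash; any reasonable hash spreads names evenly. */
static unsigned bucket(const char *name)
{
	unsigned h = 5381;

	while (*name)
		h = h * 33 + (unsigned char)*name++;
	return h & 0xff;		/* 256 buckets: 00 .. ff */
}

/* Build "base/xx/name" in buf; the bucket dirs are created once,
 * up front, by whoever sets up the data area. */
static void bucket_path(char *buf, size_t len,
			const char *base, const char *name)
{
	snprintf(buf, len, "%s/%02x/%s", base, bucket(name), name);
}

int main(void)
{
	char path[256];

	bucket_path(path, sizeof(path), "data", "report-1999-q4.txt");
	printf("%s\n", path);	/* "data/XX/report-1999-q4.txt"
				 * for some bucket XX */
	return 0;
}

With 50,000 files spread over 256 buckets, each directory holds about
200 entries, so the per-open scan stays short.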
Hope this helps. TTYL
--
Mike A. Harris                  Linux advocate
Computer Consultant             GNU advocate
Capslock Consulting             Open Source advocate
Suspicious Anagram #4:
  Word:    PRESIDENT CLINTON OF THE USA
  Anagram: TO COPULATE HE FINDS INTERNS