 
Date: 29 Feb 2000
From: Mike A. Harris
Subject: Re: very large directories
On 29 Feb 2000 nbecker@fred.net wrote:

>A coworker has an application that produces tens of thousands of small
>files. It seems the performance of even a simple operation, such as
>'rm -rf' or find | xargs rm, is very, very slow.
>
>What are the mechanisms that limit performance? Is it the type of
>data structure used to represent directories?

Every time a file needs to be opened, the kernel has to search the
directory for the filename. On ext2 a directory is just a linear
list of entries, so each lookup is an O(n) scan; removing all n
files one by one is therefore O(n^2). With 10000 or more files in
a single dir, every open pays that scan, and the churn likely
wipes out useful entries in the directory cache, or at least
mucks with it.
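
If you want to see the cost directly, here is a quick sketch (mine,
not from the app in question) that creates N empty files in a
scratch directory and then unlinks them in reverse order, which
forces each lookup to walk most of the directory. Try doubling N
and compare; on ext2 the unlink phase should grow much faster than
linearly:

    #include <stdio.h>
    #include <stdlib.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/time.h>

    int main(int argc, char **argv)
    {
        int n = (argc > 1) ? atoi(argv[1]) : 10000;
        struct timeval t0, t1;
        char name[64];
        int i, fd;

        /* scratch dir so we don't trash anything real */
        mkdir("scratch", 0755);
        if (chdir("scratch") != 0) {
            perror("chdir");
            return 1;
        }

        for (i = 0; i < n; i++) {
            sprintf(name, "f%06d", i);
            fd = open(name, O_CREAT | O_WRONLY, 0644);
            if (fd >= 0)
                close(fd);
        }

        /* reverse order: each unlink scans nearly the whole dir */
        gettimeofday(&t0, NULL);
        for (i = n - 1; i >= 0; i--) {
            sprintf(name, "f%06d", i);
            unlink(name);
        }
        gettimeofday(&t1, NULL);

        printf("unlinked %d files in %.2f seconds\n", n,
               (t1.tv_sec - t0.tv_sec) +
               (t1.tv_usec - t0.tv_usec) / 1e6);
        return 0;
    }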

Anyone concerned with file access performance should never put
that many files in a single dir. Either amalgamate the files into
one file (or a few) and change the file access logic in the app,
or else create subdirectories and spread the files across them;
a sketch of the latter follows below.
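
For the subdirectory approach, the usual trick is to hash the
filename into a fixed fan-out of buckets so no single directory
ever gets big. A minimal sketch (the base path, the hash, and the
256-way fan-out are all just assumptions to illustrate, not
anything from the original app):

    #include <stdio.h>

    /* Map "somefile.dat" to "<base>/<2-hex-digit bucket>/somefile.dat"
     * so roughly 1/256th of the files land in each subdirectory.
     * The app must create the 256 bucket dirs ("00".."ff") once up
     * front. */
    static void bucket_path(const char *base, const char *name,
                            char *out, int outlen)
    {
        unsigned int h = 0;
        const char *p;

        for (p = name; *p; p++)
            h = h * 31 + (unsigned char)*p;

        snprintf(out, outlen, "%s/%02x/%s", base, h % 256, name);
    }

    int main(void)
    {
        char path[256];

        bucket_path("/var/spool/myapp", "somefile.dat",
                    path, sizeof(path));
        printf("%s\n", path);
        return 0;
    }

Every open and unlink in the app then goes through bucket_path(),
and each directory stays at about 1/256th of the total file count,
so the per-lookup scans stay short.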

Hope this helps.
TTYL

--
Mike A. Harris          Linux advocate
Computer Consultant     GNU advocate
Capslock Consulting     Open Source advocate

Suspicious Anagram #4:
Word: PRESIDENT CLINTON OF THE USA
Anagram: TO COPULATE HE FINDS INTERNS


