Subject: Re: multiply files in one (was GNU/Linux stance by Richard Stallman)

: I think any dramatic performance increase will rely on massive
: read-ahead, such that it doesn't matter very much whether you use
: tarfs, reiserfs or ext2fs.
:
: Hm. OK, maybe the problem is that you're thinking about typically
: small files. Unfortunately I can't find your original message with the
: histogram in my archives. For my /usr/bin, the median file size is
: 9216 bytes, which, IIRC, is larger than the median you measured.

No it isn't - that's almost exactly the same median size, 6 years after
I gathered my data. Nice to know that things don't change much.

: Where the median file size is several times smaller than the block
: size, the dense packing of tarfs is clearly a winner.

This is the problem. I don't know how many times I have to make this
point: I couldn't care less about space savings or wastage. The point is
to avoid seeks, not to save space. Space is essentially free. Seeks
are not. And if you disagree, then make the darn file system so you
can twist a knob and say that you care about space more than time.

: - do you see a benefit of tarfs over reiserfs

First of all, I use "tar" to mean "a wad of files all jammed together,
possibly with gaps". Thinking tar == tar(1) is going to lead you into
details that I didn't mean. You could use ar or gdbm or whatever;
all I care about is that one I/O gets you the whole wad.
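
To be concrete, here's the shape of the thing I mean (a made-up layout,
not tar(1), ar, or any real on-disk format - just an illustration):

/*
 * Illustrative only.  The point is: one contiguous region, one I/O,
 * many files, gaps allowed.
 */

#include <stdint.h>

struct wad_member {
        uint32_t name_off;      /* offset of NUL-terminated name in the wad */
        uint32_t data_off;      /* offset of this file's data in the wad */
        uint32_t data_len;      /* length of this file's data */
        uint32_t pad;           /* gaps are fine; we're saving seeks, not space */
};

struct wad_header {
        uint32_t magic;         /* identifies a wad */
        uint32_t nmembers;      /* number of packed files */
        uint32_t total_len;     /* bytes to read to pull in the whole wad */
        struct wad_member members[];    /* one entry per packed file */
};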

Anyway, if my understanding of reiserfs is correct, yes, I sure do see
a benefit. The idea of getting everything in one I/O by putting the data
in the directory entries breaks down once you limit the amount of data,
because there is no nice natural size where you would draw the line.

If reiserfs could do stuff like "hmm, all the files in this directory
put together are < 5MB total, I'll tar 'em up", then that's what I want.

If reiserfs were really, really cool, it would have multiple tar files
per directory - one tar file per uid.
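
Something like this, roughly (a sketch of the policy only; the 5MB
number and the names are made up, and as far as I know nothing like
it exists in reiserfs today):

/*
 * Hypothetical packing policy, not real reiserfs code.  One wad per
 * (directory, uid) pair, but only while the whole pile stays small
 * enough that reading it in one I/O is obviously the right thing.
 */

#define PACK_LIMIT (5UL * 1024 * 1024)  /* 5MB - the knob you'd twist */

/* total_bytes = sum of the sizes of this uid's files in this directory */
static int should_pack(unsigned long total_bytes)
{
        return total_bytes <= PACK_LIMIT;       /* tar 'em up */
}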

: - a modest read-ahead (hundreds of kBytes) of the inode blocks will
: halve the number of seeks, and will make the remaining seeks
: generally less expensive (no need to seek between two different
: areas on the disc)

Bzzt. One I/O for the whole mess. Even if seeks were free, rotational
delays are not and I/O transactions are not. I want one I/O for the
whole directory.
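
Back of the envelope, with made-up but plausible numbers for a drive of
this era (~8ms average seek, 7200 RPM so ~4.2ms average rotational
delay, ~15MB/s sustained transfer) and 200 files at the ~9K median:

#include <stdio.h>

int main(void)
{
        /* all assumptions, not measurements */
        double seek_ms = 8.0, rot_ms = 4.2, xfer_mb_s = 15.0;
        int nfiles = 200;
        double file_kb = 9.0;

        double total_mb = nfiles * file_kb / 1024.0;
        double xfer_ms  = total_mb / xfer_mb_s * 1000.0;

        /* one seek + one rotational delay per file */
        double per_file_ms = nfiles * (seek_ms + rot_ms) + xfer_ms;
        /* one seek + one rotational delay for the whole wad */
        double one_wad_ms  = (seek_ms + rot_ms) + xfer_ms;

        printf("per-file I/O: ~%.0f ms\n", per_file_ms);  /* ~2557 ms */
        printf("one-wad I/O:  ~%.0f ms\n", one_wad_ms);   /* ~129 ms */
        return 0;
}

Even if the seeks were free, 200 rotational delays alone come to ~840ms,
still several times worse than pulling in the whole wad in ~130ms.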

: - combining this with massive read-ahead (MBytes and more) would yield
: very good performance for ext2fs as well as tarfs and reiserfs

No, it wouldn't. It would only do so if all the files in a single
directory were adjacent to each other. Which they typically aren't.
Many file systems will make files contiguous, but very few (waffle
is the only one I know of) will make files contiguous to each other
because they are in the same directory and written by the same user.

Remember, this has to work when you are untarring your crud and I am
untarring my crud and we are both allocating from the bitmap/free list/
whatever at the same time. Waffle actually gets this right, or so
I was told by Hitz when I asked.
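
Roughly the shape of what "getting it right" means (a sketch of the
idea only, not waffle's actual code, which I've only heard described):
hand out block ranges keyed by (directory, uid), so your untar and my
untar each grow their own contiguous region instead of interleaving
blocks off the shared free list.

struct alloc_region {
        unsigned long dir_ino;  /* directory this region belongs to */
        unsigned int  uid;      /* owner writing into it */
        unsigned long next;     /* next free block in the extent */
        unsigned long end;      /* one past the last block in the extent */
};

/* hypothetical helper: grab a fresh extent from the free space map;
 * stubbed out because this is only a sketch of the policy */
static unsigned long reserve_new_extent(struct alloc_region *r)
{
        (void)r;
        return 0;
}

static unsigned long alloc_block(struct alloc_region *r)
{
        if (r->next < r->end)
                return r->next++;       /* stay contiguous for this (dir, uid) */
        /* extent exhausted: reserve a fresh one elsewhere rather than
         * stealing blocks out of someone else's region */
        return reserve_new_extent(r);
}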

: - discussions about the advantages of packing data can be equally well
: addressed with sufficient read-ahead.

No they bloody can't. Not unless the allocation policy is smart enough to
sort things out while writing the data.

: I have a suspicion that we may actually agree (at least on some
: points), but we've been talking around each other.

That's simply not possible, I don't agree with anyone :-)

Sigh. I need a vacation.

