Date: Tue, 4 Nov 1997 14:46:25 -0700 (MST)
From: teunis <>
Subject: Re: Filesize limitation
On Tue, 4 Nov 1997, Rogier Wolff wrote:
> Richard B. Johnson wrote:
> >
> > On Mon, 3 Nov 1997, Andre Uratsuka Manoel wrote:
> > >
> > > Hello gentlemen,
> > >
> > > I was contacted by people asking about filesize limits in Linux as
> > > they tried to create a 2GB file, but couldn't. I figured the problem
> > > might be the filesize limitation. At least that is what I'd expect to
> > > happen with a file that size. Am I right on that? If it is so, is there
> > > a 64-bit filesystem for Linux?
> >
> > There is plenty of "room" available in the object types that would
> > define a file in Linux for an ext2 file-system (like fpos_t, etc.)
>
> No there isn't.
Yes there is - fpos_t is moving towards 64-bit, I think....
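
Easy enough to check against whatever libc you're running - a quick
sizeof probe (nothing authoritative, just a sanity check):

    #include <stdio.h>        /* fpos_t */
    #include <sys/types.h>    /* off_t */

    int main(void)
    {
        /* a 32-bit-only libc prints 4 for both; a 64-bit-ready libc
           should print 8 for at least one of them */
        printf("sizeof(off_t)  = %u\n", (unsigned) sizeof(off_t));
        printf("sizeof(fpos_t) = %u\n", (unsigned) sizeof(fpos_t));
        return 0;
    }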
> > I don't have a spare disk large enough to make a 2Gb file. However,
>
> You don't need to. Make a file with a large hole in it.
>
> > I would guess that if you can't make one (with large enough media), there
> > is either a bug in the program that tries to create it or possibly a
> > bug in the kernel. You don't need 64 bits to manipulate a 2 Gb file.
>
> Nope. The minix filesystem limits you to 256Mb of file.
> The ext2fs limits you to 2G files.
No - it allows 64-bit files :) ... provided the kernel can handle it... [AFAIK, but backed up by a thread a couple of months ago]
> Try the following:
>
> # create a file "testfile", seek to 2G-2Mb, write one Mb of zeroes.
> dd if=/dev/zero of=testfile seek=2046 bs=1024k count=1
> # OK.
>
> # Append some kilobytes of zeroes, until we hit the limit.
> dd if=/dev/zero bs=1k >> testfile
> # File too large after 1023 blocks output.
dd (or bash in this case) doesn't (yet) handle the large-file syscalls.... (llseek and friends :)
and an "append to file" requires an lseek to the end of the file... [incidentally, I can't remember why, but write() has this problem too]
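
For the curious: here's roughly what a tool would have to do by hand
on a 2.1.60+ kernel to get past the 2G line - going through _llseek
directly, since the usual libc wrappers stop at 32 bits. Treat this
as a sketch; the exact plumbing (SYS__llseek in your headers, the
high/low offset split) is my recollection, not gospel:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/syscall.h>   /* SYS__llseek, if your headers have it */

    int main(void)
    {
        int fd = open("testfile", O_WRONLY | O_CREAT, 0644);
        long long result;      /* the kernel's loff_t is 64 bits */

        /* seek to 3Gb: the 64-bit offset is passed as two 32-bit
           halves, high word first */
        if (syscall(SYS__llseek, fd, 0UL, 0xC0000000UL,
                    &result, SEEK_SET) < 0) {
            perror("_llseek");
            return 1;
        }
        write(fd, "x", 1);     /* one byte past the old 2Gb limit */
        close(fd);
        return 0;
    }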
> # See how much disk space this uses:
> ls -ls testfile
> # 2058 blocks on my system. Around 2Mb. Each of the two dd commands
> # generated an allocation of about 1Mb.
>
> Someone sent me a short DOS program to analyse an image today. My
> reaction was: I wouldn't write a program for that; I should have a tool
> already on my system to do that. So I did. Same here. Enough tools
> available on a standard Linux box to experiment with from the command
> line.....
According to the thread I followed:

1. ext2fs is happy with 64-bit limits
2. Linux/Alpha doesn't have a problem with this, IIRC
3. The 64-bit syscalls are still being built..... (as a hint: they appeared in 2.1.60....)
4. glibc-2.1 will be the first libc to support them (and already does)
Now any and all of these could be wrong... But one clue: This thread happened a month or three ago....
G'day, eh? :)

- Teunis
PS: glibc-2.1 isn't exactly common (yet). Most tools aren't linked against it. Ergo most tools can't handle >2G files. Yet. (though I suppose one could write one's own hooks into the large-file support... :)
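
PPS: for anyone who does want to roll their own - assuming glibc-2.1
keeps the LFS-style names (open64/lseek64/off64_t; those names are my
assumption, going by the large-file spec drafts), writing past 2G
would look something like:

    #define _LARGEFILE64_SOURCE   /* assumed feature-test macro */
    #include <sys/types.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open64("testfile", O_WRONLY | O_CREAT, 0644);

        /* 3Gb doesn't fit in a 32-bit off_t, but off64_t is fine */
        off64_t pos = lseek64(fd, 3LL * 1024 * 1024 * 1024, SEEK_SET);
        if (pos < 0)
            return 1;
        write(fd, "x", 1);     /* allocate one block way out there */
        close(fd);
        return 0;
    }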