Problem with Tar file
Hi all, I currently have SuSE 7.0 installed on my server, and I have a tar file that I need to send to another server by FTP. The file is 2.3GB in size, and when I try to send it, I get the following error message: "not a plain file" (ftp). I have tried to compress the file with gzip -9, and it gives the following error message: "Value too large for defined data type". What do you suggest I do? Is there a limit on the size of tar files that can be sent by FTP? Is there a limit on how far such a file can be compressed with gzip? Thanks for your help, LJ
On Wednesday, 1 August 2001 at 18:04, Luis José Fabbiani B. (SOPORTE) wrote:

The file has a size of 2.3GB,

There is a limit on the file size a Linux kernel can handle. I think it was 1 or 2GB (an option when compiling the kernel), and now (7.2, kernel 2.4) the kernel calls itself "4GB"; I bet that's the file size limitation. I wonder how you got a 2.3GB file in the first place? (Or is that 2300kB?)

jdd
--
<http://www.dodin.net> <mailto:jdanield@dodin.net>
On Wed, Aug 01, 2001 at 10:13:44PM +0200, jdd wrote:
size of 2.3GB,
there is a limit on file size a linux kernel can handle. I think it was 1 or 2 Gb (an option on compiling the kernel)

The limit was 2GB (a signed 32-bit file offset) and is now, AFAIK, 64-bit with large file support (LFS); that's a huge amount more than will ever fit on my drives.
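(As a quick check on where that 2GB figure comes from: a signed 32-bit off_t tops out at 2**31 - 1 bytes. A minimal sketch, assuming a shell with 64-bit arithmetic:

    # largest offset a signed 32-bit off_t can represent
    echo $(( (1 << 31) - 1 ))    # prints 2147483647, i.e. just under 2GB

so a 2.3GB file is past the boundary for any tool still compiled with 32-bit offsets.)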
But I found that the limit is not in the kernel as such but in the filesystem drivers: on an ext2 partition I had no problems with ~3.7GB files, while reiserfs on the same machine is not able to handle large files (yet) (my SuSE 7.2's reiserfs-3.x.0j-17): no perl, no cat >>, no dd; they all stop at 2**31 bytes.

Luis, as for your problem: I suspect there may be some programs linked with old code which do not support large files, but your tar does. Or did you create the tar locally and now try to access it via NFS? There may be limits there too. And there might be a config option in your FTP server that limits the transfer volume. But that won't explain the gzip error.

If all else fails, try to split it:

    split -b 100m your.tar some_prefix

will produce lots of pieces (some_prefixaa, some_prefixab, etc.) which can then be gzip'ed separately.

    dd if=your.tar bs=1M count=100 skip=100 | gzip > your_tar_vol_1.tgz

will produce a gzip of the second 100MB of your.tar.
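To sketch the full round trip with split (the some_prefix and your.tar names are just placeholders): split and compress on the source machine, transfer the pieces, then reassemble on the target. gunzip decompresses concatenated gzip members in sequence, so the pieces can be rejoined in one step:

    # source machine: split into 100MB pieces, compress each one
    split -b 100m your.tar some_prefix
    gzip -9 some_prefix*

    # transfer the some_prefix*.gz files by ftp, then on the target:
    # the glob sorts aa, ab, ac, ... so the pieces concatenate in order
    cat some_prefix*.gz | gunzip > your.tar

Note that the reassembled your.tar is again 2.3GB, so the target filesystem needs large file support as well.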
and now (7.2, kernel 2.4) the kernel calls itself "4GB"; I bet that's the file size limitation.
No, that's RAM. From /usr/src/linux/Documentation/Configure.help:

High Memory support (CONFIG_NOHIGHMEM)
Linux can use up to 64 Gigabytes of physical memory on x86 systems. However, the address space of 32-bit x86 processors is only 4 Gigabytes large. That means that, if you have a large amount of physical memory, not all of it can be "permanently mapped" by the kernel. The physical memory that's not permanently mapped is called "high memory". [...]

Gruss,
lars

BTW, did you check df (disk full)? (Hey, just asking. ;) Sorry, my first post went privately to jdd, so forgive me, you got it twice.
But I found that the limit is not in the kernel as such but in the filesystem drivers: on an ext2 partition I had no problems with ~3.7GB files, while reiserfs on the same machine is not able to handle large files (yet) (my SuSE 7.2's reiserfs-3.x.0j-17): no perl, no cat >>, no dd; they all stop at 2**31 bytes.
Not directly security-related, but important to know: http://www.suse.de/~aj/linux_lfs.html As well, I'm curious how many people will let me know that they're lying on the beach... :-) Roman.
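The page above boils down to building programs with 64-bit file offsets. On a glibc system, getconf reports the flags to use; a minimal sketch (mytool.c is a hypothetical example program):

    # flags needed to build with large file support
    getconf LFS_CFLAGS     # typically prints -D_FILE_OFFSET_BITS=64
    getconf LFS_LDFLAGS
    getconf LFS_LIBS

    # rebuild a tool with those flags so open()/stat() use 64-bit offsets
    gcc $(getconf LFS_CFLAGS) -o mytool mytool.c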
participants (4)
- jdd
- lars@newsone.org
- Luis José Fabbiani B. (SOPORTE)
- Roman Drahtmueller