On Wed, Aug 01, 2001 at 10:13:44PM +0200, jdd wrote:
> size of 2.3GB,
> there is a limit on file size a linux kernel can handle. I think it
> was 1 or 2 Gb (an option on compiling the kernel)

The limit was 2GB (a signed 32-bit offset, i.e. 2^31 bytes) and is now,
afaik, 2^63 bytes (64-bit offsets) -- a huge amount more than will ever
fit on my drives. (aka "large file support")
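A quick way to see whether a given filesystem (and your tools) hit that
2^31 wall is to seek just past 2GB in a fresh file and write one byte.
This is only a sketch; the path /tmp/lfs_probe is a placeholder, so
point it at the filesystem you actually want to test:

```shell
# Probe large-file support: seek 2 GB into a new (sparse) file and
# write a single byte.  If the write fails, that filesystem (or the
# dd binary itself) is stuck at 2^31 bytes.
dd if=/dev/zero of=/tmp/lfs_probe bs=1 count=1 seek=2147483648 \
    && echo "large files OK" \
    || echo "limited to 2GB"
ls -l /tmp/lfs_probe    # on success: 2147483649 bytes (sparse, so no real disk use)
rm -f /tmp/lfs_probe
```

Note the caveat from above: if dd itself was linked without large file
support, the probe fails even on a capable filesystem.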
But I found that the limit is not in the kernel as such but in the
filesystem drivers: on an ext2 partition I had no problems with ~3.7GB
files; reiserfs on the same machine is not able to handle large files
(yet) (my SuSE 7.2's reiserfs-3.x.0j-17): no perl, no cat >>, no dd --
it doesn't work, they all stop at 2^31 bytes.

Luis, your problem: I suspect there may be some programs linked with old
code which do not support large files, but your tar does. Or you created
the tar locally and now try to access it via NFS? There may be limits
there too. And there might be a config option in your ftp server that
limits the transfer volume. But that won't explain the gzip error.

If all else fails, try to split it:

  split -b 100m your.tar some_prefix

will produce lots of some_prefixaa, some_prefixab etc., which can then
be gzip'ed separately.

  dd if=your.tar bs=1M count=100 skip=100 | gzip > your_tar_vol_1.tgz

will produce a gzip of the second 100 MB of your.tar.
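Put together, the split-and-compress route looks like this (your.tar and
some_prefix are the placeholder names from above). Reassembly works
because concatenated gzip streams decompress in sequence and split's
suffixes sort in their original order:

```shell
# Split into 100 MB pieces and gzip each piece separately,
# so no single file ever crosses the 2 GB mark.
split -b 100m your.tar some_prefix
for piece in some_prefix*; do
    gzip "$piece"
done

# On the receiving side: zcat decompresses each member in turn,
# and the shell expands the names in sorted (= original) order.
zcat some_prefix*.gz > your.tar
```

The dd variant does the same job piecewise without needing scratch space
for the uncompressed pieces, at the cost of running dd once per chunk.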
> and now (7.2, kernel 2.4) the kernel names itself "4Gb", I bet it's
> the file size limitation.
No, that's RAM. From linux/Documentation/Configure.help:

  High Memory support
  CONFIG_NOHIGHMEM

  Linux can use up to 64 Gigabytes of physical memory on x86 systems.
  However, the address space of 32-bit x86 processors is only
  4 Gigabytes large. That means that, if you have a large amount of
  physical memory, not all of it can be "permanently mapped" by the
  kernel. The physical memory that's not permanently mapped is called
  "high memory". [...]

gruss, lars

btw, did you check df (disk full)? ( hey, just asking ;)

Sorry, the first post went privately to jdd, so forgive me -- you got it
twice.
--