[Bug 230254] New: cannot write files > 1GB into an udf filesystem
https://bugzilla.novell.com/show_bug.cgi?id=230254

           Summary: cannot write files > 1GB into an udf filesystem
           Product: openSUSE 10.3
           Version: unspecified
          Platform: All
        OS/Version: Linux
            Status: NEW
          Severity: Normal
          Priority: P5 - None
         Component: Basesystem
        AssignedTo: bnc-team-screening@forge.provo.novell.com
        ReportedBy: mfabian@novell.com
         QAContact: qa@suse.de

cannot write files > 1GB into an udf filesystem

--
Configure bugmail: https://bugzilla.novell.com/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug, or are watching someone who is.
https://bugzilla.novell.com/show_bug.cgi?id=230254

mfabian@novell.com changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         AssignedTo|bnc-team-                   |wolfgang.engel@novell.com
                   |screening@forge.provo.novell|
                   |.com                        |
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #1 from mfabian@novell.com 2006-12-21 09:24 MST -------

How to reproduce:

I have a 4.4G file which I want to write into an udf filesystem:

root@magellan:/sakura/mfabian/test# ls -lh ttt
total 4.4G
-r-xr-xr-x 1 mfabian suse 4.4G 2005-06-11 14:36 ttt.avi*
root@magellan:/sakura/mfabian/test#

I create a big enough file to use it as a file system image:

root@magellan:/sakura/mfabian/test# dd if=/dev/zero of=ttt.image bs=4700 count=1000000
1000000+0 records in
1000000+0 records out
4700000000 bytes (4.7 GB) copied, 132.993 s, 35.3 MB/s
root@magellan:/sakura/mfabian/test#

Now I format this image with an udf file system:

root@magellan:/sakura/mfabian/test# mkudffs ttt.image
start=0, blocks=16, type=RESERVED
start=16, blocks=3, type=VRS
start=19, blocks=237, type=USPACE
start=256, blocks=1, type=ANCHOR
start=257, blocks=16, type=PVDS
start=273, blocks=1, type=LVID
start=274, blocks=2294390, type=PSPACE
start=2294664, blocks=1, type=ANCHOR
start=2294665, blocks=239, type=USPACE
start=2294904, blocks=16, type=RVDS
start=2294920, blocks=1, type=ANCHOR
root@magellan:/sakura/mfabian/test#

Seems to have worked. Loopback mount the image:

root@magellan:/sakura/mfabian/test# mount -o loop ttt.image /mnt
root@magellan:/sakura/mfabian/test# mount | grep mnt
/sakura/mfabian/test/ttt.image on /mnt type udf (rw,loop=/dev/loop0)
root@magellan:/sakura/mfabian/test#

Seems to have worked as well. Try to write into the UDF file system:

root@magellan:/sakura/mfabian/test# echo "今日は世界" > /mnt/ttt
root@magellan:/sakura/mfabian/test# ls -l /mnt/ttt
-rw-r--r-- 1 root root 16 2006-12-21 17:08 /mnt/ttt
root@magellan:/sakura/mfabian/test# cat /mnt/ttt
今日は世界
root@magellan:/sakura/mfabian/test#

OK, works.
Now try the 4.4GB file:

root@magellan:/sakura/mfabian/test# time cp ttt.image /mnt
File size limit exceeded (core dumped)

real    1m7.186s
user    0m0.044s
sys     0m3.008s
root@magellan:/sakura/mfabian/test#

cp dumps core. It has written only 1GB:

root@magellan:/sakura/mfabian/test# ls -l /mnt
total 1048576
drwxr-xr-x 2 root root         40 2006-12-21 16:04 lost+found/
-rw-r--r-- 1 root root         16 2006-12-21 17:08 ttt
-rw-r--r-- 1 root root 1073741824 2006-12-21 17:10 ttt.image
root@magellan:/sakura/mfabian/test#

Remove the file again and try again with rsync instead of cp:

root@magellan:/sakura/mfabian/test# rm /mnt/ttt.image
root@magellan:/sakura/mfabian/test# time rsync ttt.image /mnt
rsync: writefd_unbuffered failed to write 4 bytes [sender]: Broken pipe (32)
rsync: write failed on "/mnt/ttt.image": File too large (27)
rsync error: error in file IO (code 11) at receiver.c(253) [receiver=2.6.8]
rsync: connection unexpectedly closed (40 bytes received so far) [generator]
rsync error: error in rsync protocol data stream (code 12) at io.c(459) [generator=2.6.8]
rsync: connection unexpectedly closed (30 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(459) [sender=2.6.8]

real    1m12.860s
user    0m5.328s
sys     0m2.768s
root@magellan:/sakura/mfabian/test#

rsync also fails. When using "rsync -P" one can see that it also fails after having written 1GB.
https://bugzilla.novell.com/show_bug.cgi?id=230254

wolfgang.engel@novell.com changed:

           What    |Removed |Added
----------------------------------------------------------------------------
             Status|NEW     |ASSIGNED
On Tue, Aug 15, 2006 at 01:56:26PM +0200, Jan Kara wrote:
UDF code is not really ready to handle extents larger than 1GB. This is the easy way to forbid creating those.
Also, the truncation code did not handle the case where there are no extents in the file and we are extending the file.
Signed-off-by: Jan Kara
Signed-off-by: Greg Kroah-Hartman
---
 fs/udf/super.c    |  2 +-
 fs/udf/truncate.c | 64 ++++++++++++++++++++++++++++++++---------------------
 2 files changed, 40 insertions(+), 26 deletions(-)

diff --git a/fs/udf/super.c b/fs/udf/super.c
index 7de172e..fcce1a2 100644
--- a/fs/udf/super.c
+++ b/fs/udf/super.c
@@ -1659,7 +1659,7 @@
 #endif
 		iput(inode);
 		goto error_out;
 	}
-	sb->s_maxbytes = MAX_LFS_FILESIZE;
+	sb->s_maxbytes = 1<<30;

[... rest of patch skipped ...]
https://bugzilla.novell.com/show_bug.cgi?id=230254

mhopf@novell.com changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |jack@novell.com

------- Comment #2 from mhopf@novell.com 2007-01-02 03:53 MST -------

From http://lkml.org/lkml/2006/9/4/64 :

Hello,

> After this change the size of files which can be created on an UDF
> filesystem becomes limited to 1GB. This is very unfortunate - in
> particular, it means that there will be no way to write a file larger
> than 4GB to a DVD under Linux (mkisofs -udf does not support files
> larger than 4GB, so the typical workaround was to use mkudffs and
> mount -o loop). In fact, this change may be considered a regression -
> large files on UDF seemed to work before (at least in simple cases),
> and now they are forbidden.

Actually, I've been trying this and I have not been able to create a file larger than 1GB on my computer without UDF corrupting the slab or doing some other nasty thing. OK, maybe if you created it in 1GB pieces it could work, but anyway, the problem is that currently, if you have UDF rw-mounted, an ordinary user could make UDF corrupt kernel memory... So consider this limitation more a hotfix for the security problem - the real fix is to rewrite the UDF write path to not create extents larger than 1 GB, but that is quite some work and will definitely need more testing.
Files larger than 1GB can be read even after this patch (because s_maxbytes is not checked in read paths, and udf does not use generic_file_lseek()), so old disks at least can be read.
> What issues with files larger than 1GB have been found in the code?
See above.
> Is someone working to fix these problems?
Yes, I plan to have a look into a proper fix of this problem (i.e. fix the UDF write path).
Honza
--
Jan Kara
https://bugzilla.novell.com/show_bug.cgi?id=230254

mhopf@novell.com changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |nadvornik@novell.com

------- Comment #3 from mhopf@novell.com 2007-01-02 04:09 MST -------

Actually, what you would like is to create a truly UDF-based image with mkisofs -udf, so that you can burn the data on-the-fly with growisofs. Unfortunately, mkisofs always creates an ISO image (with a UDF extension in this case) and is thus limited to 2GB (or 4GB) file sizes. I still create all my DVDs on Windows because of this limitation. I once wanted to implement this in mkisofs, but it looked nontrivial to me :-/

Adding the maintainer of mkisofs to CC as well.
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #4 from jack@novell.com 2007-01-05 10:22 MST -------

I'm not sure what to write :). Yes, we had to restrict files on UDF to at most 1GB, as there are exploitable bugs in the UDF driver for larger files. I have had no time to come up with a better fix (i.e., rewrite the allocation code in UDF). Also, Eric Sandeen from Red Hat may be working on this, so maybe he comes up with a better fix faster than me :).
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #5 from mhopf@novell.com 2007-01-05 11:13 MST -------

Understandable :) So you're all aware of the issue and working on a solution ;) Does anybody know whether creating UDF-only images with mkisofs is something that is being worked on, or has nobody dared to do so ATM?
https://bugzilla.novell.com/show_bug.cgi?id=230254

sndirsch@novell.com changed:

           What    |Removed |Added
----------------------------------------------------------------------------
                 CC|        |eich@novell.com, sndirsch@novell.com

------- Comment #6 from sndirsch@novell.com 2007-05-12 04:28 MST -------

Egbert, JFYI. Since Matthias or I am on the Cc of this bug report, or the reporter himself, it might be interesting for you as well.
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #7 from nadvornik@novell.com 2007-05-14 02:56 MST -------

The new genisoimage in STABLE has this option:

  -allow-limited-size
      When processing files larger than 2GiB which cannot be easily
      represented in ISO9660, add them with a shrunk visible file size
      to ISO9660 and with the correct visible file size to the UDF
      system. The result is an inconsistent filesystem, and users need
      to make sure that they really use the UDF rather than the ISO9660
      driver to read such a disk. Implies enabling -udf.
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #8 from mhopf@novell.com 2007-05-14 09:19 MST -------

Just an idea: if -allow-limited-size is used, multiple entries could be included in the ISO9660 filesystem, with .0 .1 .2 appended, which point to the various parts of the oversized file, so that concatenating them would yield the full content. This way the complete data could be addressed even on ISO9660-only systems, and the filesystem would IMHO be less broken.

Thanks for working on this!
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #9 from mfabian@novell.com 2007-05-16 06:37 MST -------

I have tried the '-allow-limited-size' option of genisoimage now; it seems to work well. I can write DVDs with files over 4GB now, apparently without any problems. Thank you!
https://bugzilla.novell.com/show_bug.cgi?id=230254

------- Comment #10 from mhopf@novell.com 2007-05-16 09:56 MST -------

I have verified reading a file of almost 4GB in size on Windows, and it worked fine. I haven't tested a file >4GB yet, which will be the major test.
https://bugzilla.novell.com/show_bug.cgi?id=230254#c11

--- Comment #11 from Matthias Hopf ---