mixed read/write operation
Hello,

Is it possible to avoid mixed write/read operations with pktcdvd? For example, when I move multiple small files from a CDRW disc to the hard drive, mixed read/write transactions occur. This degrades overall performance and can potentially damage the laser head of the CDRW drive.

Could a mechanism similar to a semaphore be implemented in the pktcdvd code: stop write transactions while a read is in progress, and vice versa?

Thanks.

Regards,
Sergiy Kudryk.
On Wed, 17 Jul 2002, Sergiy Kudryk wrote:
Is it possible to avoid mixed write/read operations with pktcdvd?
For example, when I move multiple small files from a CDRW disc to the hard drive, mixed read/write transactions occur. This degrades overall performance and can potentially damage the laser head of the CDRW drive.
Could a mechanism similar to a semaphore be implemented in the pktcdvd code: stop write transactions while a read is in progress, and vice versa?
This is what I wrote in an earlier mail:

I think the speed issue is caused by two things:

The udf filesystem seems to be inefficient at handling many small files. I don't know if that's caused by the current implementation or by something in the udf specification that requires such behavior.

The pktcdvd module is bypassing the I/O elevator when creating write requests for the CDRW drive. This can make performance really suffer when there is a mixed read/write load. The 2.5 version of pktcdvd has fixed this problem, but a backport is not easy because it relies heavily on the new bio infrastructure in 2.5.

I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added. Even if the caches are flushed between mkudf and mount, there should be no reason to read more data than mkudf wrote, so I think either the buffer/page cache doesn't work as intended for this case, or the udf filesystem reads disk blocks containing uninitialized data.

The ext2 filesystem doesn't have this property. It adds files at full speed without reading anything from the disk, until all your memory fills up with dirty data. When this happens the system becomes very unresponsive (as in mouse pointer in X freezes for minutes), because every process that tries to allocate memory is put to sleep. So the performance problem in the udf filesystem is hiding another problem in the virtual memory subsystem. The VM should only allow a fraction of the total RAM to be used for dirty data belonging to a slow block device.

--
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
Wayde Milas wrote:
On Wednesday 17 July 2002 04:09 pm, Ben Fennema wrote:
> Just a thought, but try:
>
> dd if=/dev/zero of=/dev/sr0 bs=2048 count=5120
> mkudffs /dev/sr0 5120
>
> mount
> try and copy stuff
>
> and see if you have any better luck.
Nopers. The create worked, but the same problem when I tried to copy the mkudffs dir over:

cp -a mkudffs /mnt/cdrom
cp: cannot create directory `/mnt/cdrom/mkudffs': Input/output error
you have of course mounted it as rw?
dmesg:

UDF-fs INFO UDF 0.9.6-rw (2002/03/11) Mounting volume 'LinuxUDF', timestamp 2002/07/17 19:27 (1ed4)
I/O error: dev 0b:00, sector 1096
i have a similar problem, though i am actually able to write what seems to be a random number of MBs each time i mount the drive.

my dmesg at mount:

UDF-fs DEBUG ../linux-2.4/lowlevel.c:57:udf_get_last_session: XA disk: no, vol_desc_start=0
UDF-fs DEBUG ../linux-2.4/super.c:1434:udf_read_super: Multi-session=0
UDF-fs DEBUG ../linux-2.4/super.c:406:udf_vrs: Starting at sector 16 (2048 byte sectors)
UDF-fs DEBUG ../linux-2.4/super.c:766:udf_load_pvoldesc: recording time 1027029029/953030, 2002/07/18 14:50 (1e5c)
UDF-fs DEBUG ../linux-2.4/super.c:776:udf_load_pvoldesc: volIdent[] = 'LinuxUDF'
UDF-fs DEBUG ../linux-2.4/super.c:783:udf_load_pvoldesc: volSetIdent[] = '3d373825LinuxUDF'
UDF-fs DEBUG ../linux-2.4/super.c:975:udf_load_logicalvol: Partition (0:0) type 1 on volume 1
UDF-fs DEBUG ../linux-2.4/super.c:985:udf_load_logicalvol: FileSet found in LogicalVolDesc at block=141, partition=0
UDF-fs DEBUG ../linux-2.4/super.c:813:udf_load_partdesc: Searching map: (0 == 0)
UDF-fs DEBUG ../linux-2.4/super.c:854:udf_load_partdesc: unallocSpaceBitmap (part 0) @ 0
UDF-fs DEBUG ../linux-2.4/super.c:895:udf_load_partdesc: Partition (0:0 type 1511) starts at physical 274, block length 2294573
UDF-fs DEBUG ../linux-2.4/super.c:1228:udf_load_partition: Using anchor in block 256
UDF-fs DEBUG ../linux-2.4/super.c:1461:udf_read_super: Lastblock=0
UDF-fs DEBUG ../linux-2.4/super.c:738:udf_find_fileset: Fileset at block=141, partition=0
UDF-fs DEBUG ../linux-2.4/super.c:799:udf_load_fileset: Rootdir at block=143, partition=0
UDF-fs INFO UDF 0.9.6 (2002/03/14) Mounting volume 'LinuxUDF', timestamp 2002/07/18 14:50 (1e5c)

i then try to copy a 500MB file onto the disk:

# cp -i cd3.iso /mnt/dvd+rw/
cp: /mnt/dvd+rw/cd3.iso: No space left on device
#

my dmesg at this point is:

I/O error: dev 0b:00, sector 1108
I/O error: dev 0b:00, sector 1108

the size of the file written to the disk is ~99MB. if i now unmount the drive and remount it, i can write some additional bytes to it.
Each time i do this, the lost+found directory appears and disappears on an apparently random basis. :(

gustavo
--
guStaSo ZaeRa | Software Engineer
BRE Systems LLC.
1532 State Street Suite C
Santa Barbara, Ca 93101
gzaera@bresystems.com
www.bresystems.com
"My goal is to make any information available from anywhere, at anytime."
On Wed, Jul 17, 2002 at 03:02:02PM -0700, guStaVo ZaeRa wrote:
Wayde Milas wrote:
On Wednesday 17 July 2002 04:09 pm, Ben Fennema wrote:
> Just a thought, but try:
>
> dd if=/dev/zero of=/dev/sr0 bs=2048 count=5120
> mkudffs /dev/sr0 5120
>
> mount
> try and copy stuff
>
> and see if you have any better luck.
Nopers. The create worked, but the same problem when I tried to copy the mkudffs dir over:

cp -a mkudffs /mnt/cdrom
cp: cannot create directory `/mnt/cdrom/mkudffs': Input/output error
you have of course mounted it as rw?
dmesg:

UDF-fs INFO UDF 0.9.6-rw (2002/03/11) Mounting volume 'LinuxUDF', timestamp 2002/07/17 19:27 (1ed4)
I/O error: dev 0b:00, sector 1096
i have a similar problem, though i am actually able to write what seems to be a random number of MBs each time i mount the drive.
i then try to copy a 500MB file onto the disk:
# cp -i cd3.iso /mnt/dvd+rw/
cp: /mnt/dvd+rw/cd3.iso: No space left on device
#
my dmesg at this point is:

I/O error: dev 0b:00, sector 1108
I/O error: dev 0b:00, sector 1108
the size of the file written to the disk is ~99MB.
if i now unmount the drive and remount it, i can write some additional bytes to it. Each time i do this, the lost+found directory appears and disappears on an apparently random basis.
Out of curiosity, is the background format complete when you try and mount the disc, or is it ongoing? (maybe every time you mount/umount the disc, more of it has been formatted so you can use more...)

If you say, dd if=/dev/zero of=/dev/sr0 bs=2048 count=(however many blocks on the dvd+rw), then run mkudffs and mount it, can you write the whole file right away?

Ben
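A rough value for that count= placeholder can be estimated from the disc's nominal capacity, since packet-written media use 2048-byte sectors. This is only a sketch: the exact count depends on the disc and should come from the drive or from mkudffs output.

```shell
# Estimate the block count for a nominal 4.7 GB DVD+RW (2048-byte sectors);
# the exact figure comes from the drive or mkudffs, not this arithmetic.
echo $((4700000000 / 2048))   # prints 2294921
```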
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, is the background format complete when you try and mount the disc, or is it ongoing? (maybe every time you mount/umount the disc, more of it has been formatted so you can use more...)
If you say, dd if=/dev/zero of=/dev/sr0 bs=2048 count=(however many blocks on the dvd+rw), then run mkudffs and mount it, can you write the whole file right away?
Ben
Wayde
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)

Latest firmware?

Ben
On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.

As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)

--
Wayde Milas
Rarcoa
(630) 654-2580
Am Fre, 2002-07-19 um 16.25 schrieb Wayde Milas:
On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.
As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)
You should get the revision from the BIOS when booting, or with "hdparm -I /dev/hdX".

With Sony you have a firmware problem. They do not provide new firmware themselves; only registered Sony dealers do the updates, which are free while the drive is under warranty and must be paid for after the warranty is over. You need to send in your writer for some days or weeks. (That's why I won't buy Sony again.)

Rene
On Friday 19 July 2002 09:57 am, Rene Bartsch wrote:
Am Fre, 2002-07-19 um 16.25 schrieb Wayde Milas:
On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.
As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)
You should get the revision from the BIOS when booting, or with "hdparm -I /dev/hdX".
With Sony you have a firmware problem. They do not provide new firmware themselves; only registered Sony dealers do the updates, which are free while the drive is under warranty and must be paid for after the warranty is over. You need to send in your writer for some days or weeks. (That's why I won't buy Sony again.)
Rene
I understand I may have a firmware problem if the firmware is old, and now I understand that I can't update it, but per my previous post, I'm not even aware if it's actually possible to create a udf filesystem on a DVD+RW and then mount it read-write in Linux. If it is, I'll pursue the firmware update further.

Wayde
--- Wayde Milas
On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.
As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)
Maybe it's a bug in your drive firmware (microcode). Check for firmware updates for your model at the following page:

http://sony.storagesupport.com/dvdrw/dru120a_dwn.htm

Also note that only CDRW drives (and not from all vendors) are fully supported under Linux in packet writing mode.
--
Wayde Milas
Rarcoa
(630) 654-2580
Regards,
Sergiy Kudryk.
On Friday 19 July 2002 10:09 am, Sergiy Kudryk wrote:
--- Wayde Milas
wrote: On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.
As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)
Maybe it's a bug in your drive firmware (microcode).
Check for firmware updates for your model at the following page:
http://sony.storagesupport.com/dvdrw/dru120a_dwn.htm
Also note that only CDRW drives (and not from all vendors) are fully supported under Linux in packet writing mode.
Yes, I'm aware that the packet interface is only applicable to CD-RW devices.. this whole thread started because I was not aware that DVD+RW drives are not packet devices. I realize this now :P

However, I'm still a bit confused as to whether there is any way to write a udf or ext2 filesystem without first creating a loopback device, making the filesystem on it, and then burning the whole big file in one shot with dd onto a DVD+RW.

Wayde
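The loopback workflow in question might look like the following sketch. Sizes, file names, and mount points are illustrative only; the filesystem, mount, and burn steps are shown as comments since they need root and a real disc.

```shell
# Build a disc image, fill it through a loopback mount, then burn it in one
# shot. Only the image creation actually runs here; the commented lines show
# the shape of the rest of the workflow.
dd if=/dev/zero of=dvd.img bs=2048 count=2048 2>/dev/null  # 4 MB image for illustration
# mkudffs dvd.img                      # put a UDF filesystem in the image
# mount -o loop dvd.img /mnt/img       # fill it via the loopback mount
# umount /mnt/img
# dd if=dvd.img of=/dev/sr0 bs=2048    # burn the whole image in one shot
wc -c < dvd.img                        # 2048 * 2048 = 4194304 bytes
```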
Am Sam, 2002-07-20 um 03.22 schrieb Wayde Milas:
On Friday 19 July 2002 10:09 am, Sergiy Kudryk wrote:
--- Wayde Milas
wrote: On Thursday 18 July 2002 09:36 pm, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 08:59:47PM -0500, Wayde Milas wrote:
I know for me the background format is definitely complete.... and the dd thingy doesn't help.
Out of curiosity, are you using a newer +R compatible drive, or the older +RW only drive? (As far as I know, they're all rebranded Ricoh drives)
Latest firmware?
Ben
I'm using a Sony DRU-120A. As far as I know it's a new device (just bought it new off of googlegear.com last month). It reads and writes DVD+R.
As far as the firmware, how would I check it? hdparm or something? (Can't test it right now cause it's sitting at home :)
Maybe it's a bug in your drive firmware (microcode).
Check for firmware updates for your model at the following page:
http://sony.storagesupport.com/dvdrw/dru120a_dwn.htm
Also note that only CDRW drives (and not from all vendors) are fully supported under Linux in packet writing mode.
Yes, I'm aware that the packet interface is only applicable to CD-RW devices.. this whole thread started because I was not aware that DVD+RW drives are not packet devices. I realize this now :P
However, I'm still a bit confused as to whether there is any way to write a udf or ext2 filesystem without first creating a loopback device, making the filesystem on it, and then burning the whole big file in one shot with dd onto a DVD+RW.
It might work. My PIONEER DVR-A03 writes to CD-RW discs initialized with cdrtool in another drive. I couldn't test DVDs yet, as I haven't found someone with a DVR-A04, which has quick-format support in firmware.

Rene
On Fri, Jul 19, 2002 at 08:22:31PM -0500, Wayde Milas wrote:
Yes, I'm aware that the packet interface is only applicable to CD-RW devices.. this whole thread started because I was not aware that DVD+RW drives are not packet devices. I realize this now :P
However, I'm still a bit confused as to whether there is any way to write a udf or ext2 filesystem without first creating a loopback device, making the filesystem on it, and then burning the whole big file in one shot with dd onto a DVD+RW.
It should work =]

For various firmware updates, see http://perso.club-internet.fr/farzeno/firmware/dvd/dvdrf.htm

So what firmware version do you have?

Ben
On Friday 19 July 2002 10:16 pm, Ben Fennema wrote:
On Fri, Jul 19, 2002 at 08:22:31PM -0500, Wayde Milas wrote:
Yes, I'm aware that the packet interface is only applicable to CD-RW devices.. this whole thread started because I was not aware that DVD+RW drives are not packet devices. I realize this now :P
However, I'm still a bit confused as to whether there is any way to write a udf or ext2 filesystem without first creating a loopback device, making the filesystem on it, and then burning the whole big file in one shot with dd onto a DVD+RW.
It should work =]
for various firmware updates, see http://perso.club-internet.fr/farzeno/firmware/dvd/dvdrf.htm
So what firmware version do you have?
Ben
Model=SONY DVD+RW DRU-120A, FwRev=1.13, SerialNo=ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ

Looking at that firmware update page, it looks like it's the most current firmware :P

Wayde
On Sat, Jul 20, 2002 at 08:04:00AM -0500, Wayde Milas wrote:
Model=SONY DVD+RW DRU-120A, FwRev=1.13, SerialNo=ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ
Looking at that firmware update page, it looks like it's the most current firmware :P
Have you tried writing to it via ide-cd vs ide-scsi?

Ben
On Monday 22 July 2002 12:55 pm, Ben Fennema wrote:
On Sat, Jul 20, 2002 at 08:04:00AM -0500, Wayde Milas wrote:
Model=SONY DVD+RW DRU-120A, FwRev=1.13, SerialNo=ÿÿÿÿÿÿÿÿÿÿÿÿÿÿÿ
Looking at that firmware update page, it looks like it's the most current firmware :P
Have you tried writing to it via ide-cd vs ide-scsi?
Ben
I think I did and was not able to, since the drive needs the SCSI interface to function properly, but I'll try again tonight.

--
Wayde Milas
Rarcoa
(630) 654-2580
Ben Fennema wrote:
On Wed, Jul 17, 2002 at 03:02:02PM -0700, guStaVo ZaeRa wrote:
Wayde Milas wrote:
On Wednesday 17 July 2002 04:09 pm, Ben Fennema wrote:
> Just a thought, but try:
>
> dd if=/dev/zero of=/dev/sr0 bs=2048 count=5120
> mkudffs /dev/sr0 5120
>
> mount
> try and copy stuff
>
> and see if you have any better luck.
Nopers. The create worked, but the same problem when I tried to copy the mkudffs dir over:

cp -a mkudffs /mnt/cdrom
cp: cannot create directory `/mnt/cdrom/mkudffs': Input/output error
you have of course mounted it as rw?
dmesg:

UDF-fs INFO UDF 0.9.6-rw (2002/03/11) Mounting volume 'LinuxUDF', timestamp 2002/07/17 19:27 (1ed4)
I/O error: dev 0b:00, sector 1096
i have a similar problem, though i am actually able to write what seems to be a random number of MBs each time i mount the drive.
i then try to copy a 500MB file onto the disk:
# cp -i cd3.iso /mnt/dvd+rw/
cp: /mnt/dvd+rw/cd3.iso: No space left on device
#
my dmesg at this point is:

I/O error: dev 0b:00, sector 1108
I/O error: dev 0b:00, sector 1108
the size of the file written to the disk is ~99MB.
if i now unmount the drive and remount it, i can write some additional bytes to it. Each time i do this, the lost+found directory appears and disappears on an apparently random basis.
Out of curiosity, is the background format complete when you try and mount the disc, or is it ongoing? (maybe every time you mount/umount the disc, more of it has been formatted so you can use more...)
i am pretty sure that it's all done. the formatting only takes a couple of seconds, right? this is the output:

[root@ksi2 compile]# mkudffs --media-type=cdrw /dev/cdrom
start=0, blocks=16, type=RESERVED
start=16, blocks=3, type=VRS
start=19, blocks=237, type=USPACE
start=256, blocks=1, type=ANCHOR
start=257, blocks=31, type=USPACE
start=288, blocks=32, type=PVDS
start=320, blocks=32, type=LVID
start=352, blocks=32, type=STABLE
start=384, blocks=1024, type=SSPACE
start=1408, blocks=2293408, type=PSPACE
start=2294816, blocks=31, type=USPACE
start=2294847, blocks=1, type=ANCHOR
start=2294848, blocks=160, type=USPACE
start=2295008, blocks=32, type=STABLE
start=2295040, blocks=32, type=RVDS
start=2295072, blocks=31, type=USPACE
start=2295103, blocks=1, type=ANCHOR
If you say, dd if=/dev/zero of=/dev/sr0 bs=2048 count=(however many blocks on the dvd+rw),
a question that might seem a little naive: how do i know the number of blocks on the disk?
then run mkudffs and mount it, can you write the whole file right away?
i don't know yet.

gustavo
--
guStaSo ZaeRa | Software Engineer
BRE Systems LLC.
1532 State Street Suite C
Santa Barbara, Ca 93101
gzaera@bresystems.com
http://www.bresystems.com
"My goal is to make any information available from anywhere, at anytime."
On Wed, Jul 17, 2002 at 07:09:18PM -0700, guStaVo ZaeRa wrote:
[root@ksi2 compile]# mkudffs --media-type=cdrw /dev/cdrom
start=0, blocks=16, type=RESERVED
start=16, blocks=3, type=VRS
start=19, blocks=237, type=USPACE
start=256, blocks=1, type=ANCHOR
start=257, blocks=31, type=USPACE
start=288, blocks=32, type=PVDS
start=320, blocks=32, type=LVID
start=352, blocks=32, type=STABLE
start=384, blocks=1024, type=SSPACE
start=1408, blocks=2293408, type=PSPACE
start=2294816, blocks=31, type=USPACE
start=2294847, blocks=1, type=ANCHOR
start=2294848, blocks=160, type=USPACE
start=2295008, blocks=32, type=STABLE
start=2295040, blocks=32, type=RVDS
start=2295072, blocks=31, type=USPACE
start=2295103, blocks=1, type=ANCHOR
2295104 =]

Which is actually more than 4700000000 bytes =)

Ben
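That figure checks out with a quick bit of arithmetic: 2295104 blocks of 2048 bytes each comes to just over the nominal 4.7 GB.

```shell
# Total capacity implied by the mkudffs output above (2048-byte blocks).
blocks=2295104
echo $((blocks * 2048))   # prints 4700372992
```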
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
On Wed, 17 Jul 2002, Sergiy Kudryk wrote:
Is it possible to avoid mixed write/read operations with pktcdvd?
For example, when I move multiple small files from a CDRW disc to the hard drive, mixed read/write transactions occur. This degrades overall performance and can potentially damage the laser head of the CDRW drive.
Could a mechanism similar to a semaphore be implemented in the pktcdvd code: stop write transactions while a read is in progress, and vice versa?
This is what I wrote in an earlier mail:
I think the speed issue is caused by two things:
The udf filesystem seems to be inefficient at handling many small files. I don't know if that's caused by the current implementation or by something in the udf specification that requires such behavior.
The pktcdvd module is bypassing the I/O elevator when creating write requests for the CDRW drive. This can make performance really suffer when there is a mixed read/write load. The 2.5 version of pktcdvd has fixed this problem, but a backport is not easy because it relies heavily on the new bio infrastructure in 2.5.
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.

Mounting with -o noadinicb turns off this behavior. You could try it and see if it eliminates the extra reads.

Ben
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Mounting with -o noadinicb turns off this behavior. You could try it and see if it eliminates the extra reads.
There are still reads. I'm not sure if it's exactly the same amount of reads, but I didn't notice any difference in performance.

(But maybe you already know what the problem is. I got that impression from your other mail.)

--
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Fri, Jul 19, 2002 at 01:33:13PM +0200, Peter Osterlund wrote:
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Mounting with -o noadinicb turns off this behavior. You could try it and see if it eliminates the extra reads.
There are still reads. I'm not sure if it's exactly the same amount of reads, but I didn't notice any difference in performance.
(But maybe you already know what the problem is. I got that impression from your other mail.)
Ok, here's a patch which should fix the problem. Let me know if it shows a noticeable improvement (or if you can't get it to apply nicely to whatever kernel version you're using) =]

It's against the 2.4 cvs tree, and probably only works on 2.4.18+ (since all the version checks got ripped out of the code, which is actually what makes the patch as large as it is).

Ben
On Fri, 19 Jul 2002, Ben Fennema wrote:
On Fri, Jul 19, 2002 at 01:33:13PM +0200, Peter Osterlund wrote:
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Mounting with -o noadinicb turns off this behavior. You could try it and see if it eliminates the extra reads.
There are still reads. I'm not sure if it's exactly the same amount of reads, but I didn't notice any difference in performance.
(But maybe you already know what the problem is. I got that impression from your other mail.)
Ok, here's a patch which should fix the problem.
Wow, that was quick.
Let me know if it shows a noticeable improvement (or if you can't get it to apply nicely to whatever kernel version you're using) =]
Yes it works perfectly. I applied it to the 2.4 cvs tree then copied it into the 2.4.19-rc2 tree. Compilation complained about a missing i_bh struct member, but this patch fixed it:

--- udf_fs_i.h.orig	Fri Jul 19 23:06:35 2002
+++ udf_fs_i.h	Fri Jul 19 21:53:43 2002
@@ -47,6 +47,7 @@
 	unsigned i_strat_4096 : 1;
 	unsigned i_new_inode : 1;
 	unsigned reserved : 26;
+	struct buffer_head *i_bh;
 };
 
 #endif

With your patch, there are no reads when adding files. It appears to be just as efficient as ext2. Good work!

Unfortunately I can't really stress test this because of the deadlock bug in usb-storage under high memory pressure.

--
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Fri, Jul 19, 2002 at 11:21:54PM +0200, Peter Osterlund wrote:
Yes it works perfectly. I applied it to the 2.4 cvs tree then copied it into the 2.4.19-rc2 tree. Compilation complained about a missing i_bh struct member, but this patch fixed it:
--- udf_fs_i.h.orig	Fri Jul 19 23:06:35 2002
+++ udf_fs_i.h	Fri Jul 19 21:53:43 2002
@@ -47,6 +47,7 @@
 	unsigned i_strat_4096 : 1;
 	unsigned i_new_inode : 1;
 	unsigned reserved : 26;
+	struct buffer_head *i_bh;
 };
 
 #endif
Oops.. forgot about that part of the tree =]

i_new_inode goes away as well, so it becomes:

Index: udf_fs_i.h
===================================================================
RCS file: /cvsroot/linux-udf/udf/include/linux/udf_fs_i.h,v
retrieving revision 1.23
diff -u -p -r1.23 udf_fs_i.h
--- udf_fs_i.h	15 Mar 2002 06:47:25 -0000	1.23
+++ udf_fs_i.h	19 Jul 2002 21:37:33 -0000
@@ -30,6 +30,7 @@ typedef struct

 struct udf_inode_info
 {
+	struct buffer_head *i_bh;
 	long i_umtime;
 	long i_uctime;
 	long i_crtime;
@@ -45,8 +46,7 @@ struct udf_inode_info
 	unsigned i_alloc_type : 3;
 	unsigned i_extended_fe : 1;
 	unsigned i_strat_4096 : 1;
-	unsigned i_new_inode : 1;
-	unsigned reserved : 26;
+	unsigned reserved : 27;
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,3)
 	struct inode vfs_inode;
 #endif
With your patch, there are no reads when adding files. It appears to be just as efficient as ext2. Good work!
Excellent! Ben
I too have applied Ben's patches and the time taken to write the kernel sources to a CDRW on a 32x12x40 burner has reduced dramatically (from > 50 minutes to < 5). Good work.

But something else appears to have happened as well. For a week or two (and, I freely admit, without really knowing what I am looking for), I have been digging around seeing if I could find what might be causing the file corruption (2048 bytes of 0's) when large numbers of files (like the kernel sources) are written to a packet-formatted CDRW. Having applied Ben's patches, that problem seems to have gone away too!

This is what I am doing:

cdrwtool -d /dev/sr0 -q
pktsetup /dev/pktcdvd0 /dev/sr0
mount -t udf -o rw,noatime /dev/pktcdvd0 /cdrw
cp -r linux /cdrw
diff -ar --brief linux /cdrw/linux >> linux.diff

At the end of this, linux.diff is empty.

My kernel is 2.4.19-rc3 with packet-cd 2.4.19-rc1 and updated with udf 0.9.6 with Ben's patches applied.

Just to be sure, I've gone back to a 2.4.19-rc3 kernel without the udf patches, rerun the whole test again, and sure enough diff throws up faults again (14 files were different, which is pretty typical in my experience). I can't see anything wrong with the way I am testing this. Is anyone else getting similar results please?

Chris

On Friday 19 July 2002 10:37 pm, Ben Fennema wrote:
On Fri, Jul 19, 2002 at 11:21:54PM +0200, Peter Osterlund wrote:
Yes it works perfectly. I applied it to the 2.4 cvs tree then copied it into the 2.4.19-rc2 tree. Compilation complained about a missing i_bh struct member, but this patch fixed it:
--- udf_fs_i.h.orig	Fri Jul 19 23:06:35 2002
+++ udf_fs_i.h	Fri Jul 19 21:53:43 2002
@@ -47,6 +47,7 @@
 	unsigned i_strat_4096 : 1;
 	unsigned i_new_inode : 1;
 	unsigned reserved : 26;
+	struct buffer_head *i_bh;
 };
 
 #endif
Oops.. forgot about that part of the tree =]
i_new_inode goes away as well, so it becomes:
Index: udf_fs_i.h
===================================================================
RCS file: /cvsroot/linux-udf/udf/include/linux/udf_fs_i.h,v
retrieving revision 1.23
diff -u -p -r1.23 udf_fs_i.h
--- udf_fs_i.h	15 Mar 2002 06:47:25 -0000	1.23
+++ udf_fs_i.h	19 Jul 2002 21:37:33 -0000
@@ -30,6 +30,7 @@ typedef struct

 struct udf_inode_info
 {
+	struct buffer_head *i_bh;
 	long i_umtime;
 	long i_uctime;
 	long i_crtime;
@@ -45,8 +46,7 @@ struct udf_inode_info
 	unsigned i_alloc_type : 3;
 	unsigned i_extended_fe : 1;
 	unsigned i_strat_4096 : 1;
-	unsigned i_new_inode : 1;
-	unsigned reserved : 26;
+	unsigned reserved : 27;
 #if LINUX_VERSION_CODE >= KERNEL_VERSION(2,5,3)
 	struct inode vfs_inode;
 #endif
With your patch, there are no reads when adding files. It appears to be just as efficient as ext2. Good work!
Excellent!
Ben
On Sat, 20 Jul 2002, Chris Clayton wrote:
I too have applied Ben's patches and the time taken to write the kernel sources to a CDRW on a 32x12x40 burner has reduced dramatically (from > 50 minutes to < 5). Good work.
But something else appears to have happened as well. For a week or two (and, I freely admit, without really knowing what I am looking for), I have been digging around trying to find what might be causing the file corruption (2048 bytes of 0's) when large numbers of files (like the kernel sources) are written to a packet-formatted CDRW. Having applied Ben's patches, that problem seems to have gone away too!
I think this is just a happy coincidence. I guess there exists a particular sequence of read/write commands that makes the packet driver confused if the timing is exactly right, and Ben's udf optimization just makes that much less likely to happen. Remember that I saw this corruption also when using the ext2 filesystem. Anyway, investigating this corruption problem is on my todo list.

Did you always see only 0's in the corrupted data? I remember I saw pieces from other files and seemingly random data.

--
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Saturday 20 Jul 2002 10:39 pm, Peter Osterlund wrote:
Remember that I saw this corruption also when using the ext2 filesystem. Anyway, investigating this corruption problem is on my todo list.
OK, I too have tried ext2, but the problem has never shown up.
Did you always see only 0's in the corrupted data? I remember I saw pieces from other files and seemingly random data.
As far as I have investigated so far, it is always 0's. Most frequently, it is the first 2048 bytes which are set to zero, unless the file is less than 2048 bytes in size, in which case it will be the whole file. I have, though, seen some with 0's at the end of the file, starting at the final 2048-byte boundary and running to the end of the file. I can't recall now whether I have seen any with 0's in the middle of a file. Later today I will do some more tests and use hexdump and diff to get some better information for you.

Chris
On Saturday 20 July 2002 10:39 pm, Peter Osterlund wrote:
Did you always see only 0's in the corrupted data? I remember I saw pieces from other files and seemingly random data.
Peter, I said I would do some more tests and I have done so. Not sure how much it helps, but this is what I observe, using the test I described in my message yesterday:

17 files were corrupted. In all cases, data had been replaced by 0's. 11 of the files had their first 2048 bytes set to 0, and of these 3 had second chunks of corruption. One, which was 3337 bytes in size, had the remainder of the file also set to 0 (i.e. it was all 0's). The other 2 had their final chunks, one starting at 0x6800 and the other at 0x1800, set to 0's. 5 files had their final chunks, starting at 0x1000, filled with 0's. 1 file had its final chunk, starting at 0x2000, filled with 0's.

From these tests it would appear that it is always the first and final chunks of the file which are corrupted, and sometimes it's both. The corruption always starts on a 2048-byte boundary, but I think we already knew that. From some notes I have from earlier tests, I can say for sure that different files are corrupted on each test.

I've kept the hexdumps of the pre and post copied files and the diffs from those hexdumps, and can tar them and put them in some free webspace for retrieval if they would be of any use to anyone. Also, more than happy to try any other tests that might be useful.

Regards,
Chris
On Saturday 20 July 2002 10:39 pm, Peter Osterlund wrote:
On Sat, 20 Jul 2002, Chris Clayton wrote:
I too have applied Ben's patches and the time taken to write the kernel sources to a CDRW on a 32x12x40 burner has reduced dramatically (from > 50 minutes to < 5). Good work.
But something else appears to have happened as well. For a week or two (and, I freely admit, without really knowing what I am looking for), I have been digging around trying to find what might be causing the file corruption (2048 bytes of 0's) when large numbers of files (like the kernel sources) are written to a packet-formatted CDRW. Having applied Ben's patches, that problem seems to have gone away too!
I think this is just a happy coincidence. I guess there exists a particular sequence of read/write commands that makes the packet driver confused if the timing is exactly right, and Ben's udf optimization just makes that much less likely to happen.
Remember that I saw this corruption also when using the ext2 filesystem. Anyway, investigating this corruption problem is on my todo list.
Did you always see only 0's in the corrupted data? I remember I saw pieces from other files and seemingly random data.
I've since done several more runs of the tests I outlined a few days ago. I have now experienced all the problems Peter has mentioned: corruption on an ext2 filesystem created on a CDRW, pieces of other files embedded in the "target" file, and corruption in the middle of files with good data before and after.

But to get these symptoms on ext2 I had to vary the test a bit. I copied the linux sources to the CDRW and diff'd them and they were OK (i.e. no corruption). But this time, instead of blanking the disk, I just copied the sources over the top of the first copy, and this time I got 48 corrupted files - a new, all-time personal best for me :)

All of the above is with a kernel without Ben's UDF patch.
This is what I am doing:
cdrwtool -d /dev/sr0 -q
pktsetup /dev/pktcdvd0 /dev/sr0
mount -t udf -o rw,noatime /dev/pktcdvd0 /cdrw
cp -r linux /cdrw
diff -ar --brief linux /cdrw/linux >> linux.diff
I try to do the same, and get the following errors. could someone tell
me what i'm doing wrong?
[root@ksi4 /dev]# cdrwtool -d /dev/cdrom -q
using device /dev/cdrom
1280KB internal buffer
setting write speed to 12x
Settings for /dev/cdrom:
Fixed packets, size 32
Mode-2 disc
I'm going to do a quick setup of /dev/cdrom. The disc is going to be
blanked and
formatted with one big track. All data on the device will be lost!!
Press CTRL-C
to cancel now.
ENTER to continue.
Initiating quick disc blank
wait_cmd: Invalid argument
Command failed: a1 01 00 00 00 00 00 00 00 00 00 00 - sense 05.30.02
blank disc: Invalid argument
--
If I use Andy's dvd formatting tool (dvd+rw-format) 'dvd+rw-format -f
/dev/cdrom' I get this:
[root@ksi4 /root]# dvd+rw-format -f /dev/cdrom
* DVD+RW format utility by
On Mon, Jul 22, 2002 at 10:47:42AM -0700, guStaVo ZaeRa wrote:
I try to do the same, and get the following errors. could someone tell me what i'm doing wrong?
[root@ksi4 /dev]# cdrwtool -d /dev/cdrom -q
That only works for CDRW's
If I use Andy's dvd formatting tool (dvd+rw-format) 'dvd+rw-format -f /dev/cdrom' I get this:
[root@ksi4 /root]# dvd+rw-format -f /dev/cdrom
* DVD+RW format utility by , version 2.0.
* 4.7GB DVD+RW media detected.
* formatting 1.6%

one question: why does the formatting only go to 1.6%?
Cause it's supposed to auto-extend the format as you write to the disc. That's why I suggested dd-ing the whole disc, then trying to mount it and copy the file. Then you'd know the whole disc had been formatted, and it wasn't a problem with the format extension.

Ben
I try to do the same, and get the following errors. could someone tell me what i'm doing wrong?
[root@ksi4 /dev]# cdrwtool -d /dev/cdrom -q
That only works for CDRW's
oh.. hehe... now i feel a bit silly..
If I use Andy's dvd formatting tool (dvd+rw-format) 'dvd+rw-format -f /dev/cdrom' I get this:
[root@ksi4 /root]# dvd+rw-format -f /dev/cdrom
* DVD+RW format utility by , version 2.0.
* 4.7GB DVD+RW media detected.
* formatting 1.6%

one question: why does the formatting only go to 1.6%?
Cause it's supposed to auto-extend the format as you write to the disc.
That's why I suggested dd-ing the whole disc, then trying to mount it and copy the file. Then you'd know the whole disc had been formatted, and it wasn't a problem with the format extension.
Ben
I've done as you suggested:

dd if=/dev/zero of=/dev/sr0 bs=2048 count=2295104

but since I don't have sr0 in my /dev dir, I assume it's supposed to be scd0.

[root@ksi4 /root]# dd if=/dev/zero of=/dev/scd0 bs=2048 count=2295104
dd: /dev/scd0: Read-only file system

So then I try to do this:

[root@ksi4 /root]# mkudffs --media-type=cdrw /dev/cdrom
trying to change type of multiple extents

As you can probably see, I'm a bit confused with the whole thing. What should I do now?

cheers,
guStaVo
On Mon, Jul 22, 2002 at 11:28:35AM -0700, guStaVo ZaeRa wrote:
I've done as you suggested :
dd if=/dev/zero of=/dev/sr0 bs=2048 count=2295104
but since i don't have sr0 in my /dev dir, i assume it's supposed to be scd0.
[root@ksi4 /root]# dd if=/dev/zero of=/dev/scd0 bs=2048 count=2295104
dd: /dev/scd0: Read-only file system
So then i try to do this :
[root@ksi4 /root]# mkudffs --media-type=cdrw /dev/cdrom
trying to change type of multiple extents
Wasn't that working before? I'm confused.. You've applied the kernel patch for DVD+RW, right?

Ben
Ben Fennema wrote:
On Mon, Jul 22, 2002 at 11:28:35AM -0700, guStaVo ZaeRa wrote:
I've done as you suggested :
dd if=/dev/zero of=/dev/sr0 bs=2048 count=2295104
but since i don't have sr0 in my /dev dir, i assume it's supposed to be scd0.
[root@ksi4 /root]# dd if=/dev/zero of=/dev/scd0 bs=2048 count=2295104
dd: /dev/scd0: Read-only file system
So then i try to do this :
[root@ksi4 /root]# mkudffs --media-type=cdrw /dev/cdrom
trying to change type of multiple extents
Wasn't that working before? I'm confused.. You've applied the kernel patch for DVD+RW, right?
Ben
Believe me, I'm pretty confused as well. Yes, it was working before, but I tried to do several things, and now I can't make it work again. So, I'm going to start from scratch.

I've downloaded the 2.4.17 kernel source and applied the corresponding UDF patch. Then I compile the new kernel with 'make dep clean bzImage modules modules_install'. Then cp bzImage to /boot and run lilo. Then reboot. When I boot up with the new kernel, the UDF module is loaded, but when I try to do the dd command you suggested, I get the error I mentioned before. What am I forgetting?

gustavo

--
guStaVo ZaeRa | Software Engineer
BRE Systems LLC.
1532 State Street Suite C
Santa Barbara, Ca 93101
gzaera@bresystems.com
http://www.bresystems.com
"My goal is to make any information available from anywhere, at anytime."
On Mon, Jul 22, 2002 at 12:44:55PM -0700, guStaVo ZaeRa wrote:
Believe me, I'm pretty confused as well. Yes, it was working before, but I tried to do several things, and now I can't make it work again.
so, i'm going to start from scratch.
I've downloaded the 2.4.17 kernel source and applied the corresponding UDF patch. Then I compile the new kernel with 'make dep clean bzImage modules modules_install'. Then cp bzImage to /boot and run lilo. Then reboot. When I boot up with the new kernel, the UDF module is loaded, but when I try to do the dd command you suggested, I get the error I mentioned before. What am I forgetting?
You need the DVD+RW patch as well... Were you using ide-scsi or ide-cd before? (What's /dev/cdrom linked to?)

Ben
On Monday 22 July 2002 02:17 pm, Ben Fennema wrote:
On Mon, Jul 22, 2002 at 11:28:35AM -0700, guStaVo ZaeRa wrote:
I've done as you suggested :
dd if=/dev/zero of=/dev/sr0 bs=2048 count=2295104
but since i don't have sr0 in my /dev dir, i assume it's supposed to be scd0.
[root@ksi4 /root]# dd if=/dev/zero of=/dev/scd0 bs=2048 count=2295104
dd: /dev/scd0: Read-only file system
So then i try to do this :
[root@ksi4 /root]# mkudffs --media-type=cdrw /dev/cdrom
trying to change type of multiple extents
Wasn't that working before? I'm confused.. You've applied the kernel patch for DVD+RW, right?
Ben
Ben, I think you are confusing Gustavo with me. We seem to be having the same problem.. 2 different threads but the same exact symptoms might have confused you. :P

--
Wayde Milas
Rarcoa
(630) 654-2580
On Mon, Jul 22, 2002 at 03:18:59PM -0500, Wayde Milas wrote:
Ben, I think you are confusing Gustavo with me. We seem to be having the same problem.. 2 different threads but the same exact symptoms might have confused you. :P
No, he had it working.. cept it kept bombing out after some time copying a large file... and every time he remounted, he could copy more data =]

Ben
On Monday 22 July 2002 03:39 pm, Ben Fennema wrote:
On Mon, Jul 22, 2002 at 03:18:59PM -0500, Wayde Milas wrote:
Ben, I think you are confusing Gustavo with me. We seem to be having the same problem.. 2 different threads but the same exact symptoms might have confused you. :P
No, he had it working.. cept it kept bombing out after some time copying a large file... and every time he remounted, he could copy more data =]
Ben
Oh. Now I remember. :P Wayde
On Fri, 19 Jul 2002, Peter Osterlund wrote:
On Fri, 19 Jul 2002, Ben Fennema wrote:
On Fri, Jul 19, 2002 at 01:33:13PM +0200, Peter Osterlund wrote:
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Ok, here's a patch which should fix the problem.
Let me know if it shows a noticeable improvement (or if you can't get it to apply nicely to whatever kernel version you're using) =]
Yes it works perfectly. I applied it to the 2.4 cvs tree then copied it into the 2.4.19-rc2 tree.
I needed this patch for 2.5.29 too so I made a forward port. It appears to work fine and gives a big performance increase also in 2.5. Can you please have a look at the patch and maybe submit it to Linus if you think it's OK? Thanks.

diff -u -r -N ../../linus/main/linux/fs/udf/balloc.c linux/fs/udf/balloc.c
--- ../../linus/main/linux/fs/udf/balloc.c	Sat Jul 27 19:30:32 2002
+++ linux/fs/udf/balloc.c	Sun Jul 28 15:14:36 2002
@@ -592,7 +592,7 @@
 		sptr = (obh)->b_data + nextoffset;
 		nextoffset = sizeof(struct allocExtDesc);
-		if (memcmp(&UDF_I_LOCATION(table), &obloc, sizeof(lb_addr)))
+		if (obh != UDF_I_BH(inode))
 		{
 			aed = (struct allocExtDesc *)(obh)->b_data;
 			aed->lengthAllocDescs =
@@ -639,7 +639,7 @@
 	{
 		udf_write_aext(table, nbloc, &nextoffset, eloc, elen, nbh, 1);
-		if (!memcmp(&UDF_I_LOCATION(table), &nbloc, sizeof(lb_addr)))
+		if (nbh == UDF_I_BH(table))
 		{
 			UDF_I_LENALLOC(table) += adsize;
 			mark_inode_dirty(table);
diff -u -r -N ../../linus/main/linux/fs/udf/file.c linux/fs/udf/file.c
--- ../../linus/main/linux/fs/udf/file.c	Sat Jul 27 19:30:33 2002
+++ linux/fs/udf/file.c	Sun Jul 28 15:14:36 2002
@@ -48,8 +48,6 @@
 {
 	struct inode *inode = page->mapping->host;
-	struct buffer_head *bh;
-	int block;
 	char *kaddr;
 	int err = 0;
@@ -58,19 +56,9 @@
 	kaddr = kmap(page);
 	memset(kaddr, 0, PAGE_CACHE_SIZE);
-	block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
-	bh = sb_bread(inode->i_sb, block);
-	if (!bh)
-	{
-		SetPageError(page);
-		err = -EIO;
-		goto out;
-	}
-	memcpy(kaddr, bh->b_data + udf_ext0_offset(inode), inode->i_size);
-	brelse(bh);
+	memcpy(kaddr, UDF_I_BH(inode)->b_data + udf_ext0_offset(inode), inode->i_size);
 	flush_dcache_page(page);
 	SetPageUptodate(page);
-out:
 	kunmap(page);
 	unlock_page(page);
 	return err;
@@ -80,8 +68,6 @@
 {
 	struct inode *inode = page->mapping->host;
-	struct buffer_head *bh;
-	int block;
 	char *kaddr;
 	int err = 0;
@@ -89,19 +75,9 @@
 		PAGE_BUG(page);
 	kaddr = kmap(page);
-	block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
-	bh = sb_bread(inode->i_sb, block);
-	if (!bh)
-	{
-		SetPageError(page);
-		err = -EIO;
-		goto out;
-	}
-	memcpy(bh->b_data + udf_ext0_offset(inode), kaddr, inode->i_size);
-	mark_buffer_dirty(bh);
-	brelse(bh);
+	memcpy(UDF_I_BH(inode)->b_data + udf_ext0_offset(inode), kaddr, inode->i_size);
+	mark_buffer_dirty(UDF_I_BH(inode));
 	SetPageUptodate(page);
-out:
 	kunmap(page);
 	unlock_page(page);
 	return err;
@@ -117,25 +93,13 @@
 {
 	struct inode *inode = page->mapping->host;
-	struct buffer_head *bh;
-	int block;
 	char *kaddr = page_address(page);
 	int err = 0;
-	block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
-	bh = sb_bread(inode->i_sb, block);
-	if (!bh)
-	{
-		SetPageError(page);
-		err = -EIO;
-		goto out;
-	}
-	memcpy(bh->b_data + udf_file_entry_alloc_offset(inode) + offset,
+	memcpy(UDF_I_BH(inode)->b_data + udf_file_entry_alloc_offset(inode) + offset,
 		kaddr + offset, to - offset);
-	mark_buffer_dirty(bh);
-	brelse(bh);
+	mark_buffer_dirty(UDF_I_BH(inode));
 	SetPageUptodate(page);
-out:
 	kunmap(page);
 	/* only one page here */
 	if (to > inode->i_size)
@@ -232,7 +196,6 @@
 	unsigned long arg)
 {
 	int result = -EINVAL;
-	struct buffer_head *bh = NULL;
 	long_ad eaicb;
 	uint8_t *ea = NULL;
@@ -270,20 +233,11 @@
 	}
 	/* ok, we need to read the inode */
-	bh = udf_tread(inode->i_sb,
-		udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0));
-
-	if (!bh)
-	{
-		udf_debug("bread failed (inode=%ld)\n", inode->i_ino);
-		return -EIO;
-	}
-
 	if (UDF_I_EXTENDED_FE(inode) == 0)
 	{
 		struct fileEntry *fe;
-		fe = (struct fileEntry *)bh->b_data;
+		fe = (struct fileEntry *)UDF_I_BH(inode)->b_data;
 		eaicb = lela_to_cpu(fe->extendedAttrICB);
 		if (UDF_I_LENEATTR(inode))
 			ea = fe->extendedAttr;
@@ -292,7 +246,7 @@
 	{
 		struct extendedFileEntry *efe;
-		efe = (struct extendedFileEntry *)bh->b_data;
+		efe = (struct extendedFileEntry *)UDF_I_BH(inode)->b_data;
 		eaicb = lela_to_cpu(efe->extendedAttrICB);
 		if (UDF_I_LENEATTR(inode))
 			ea = efe->extendedAttr;
@@ -310,7 +264,6 @@
 			break;
 	}
-	udf_release_data(bh);
 	return result;
 }
diff -u -r -N ../../linus/main/linux/fs/udf/ialloc.c linux/fs/udf/ialloc.c
--- ../../linus/main/linux/fs/udf/ialloc.c	Sat Jul 27 19:30:33 2002
+++ linux/fs/udf/ialloc.c	Sun Jul 28 15:14:36 2002
@@ -152,7 +152,12 @@
 	UDF_I_CRTIME(inode) = CURRENT_TIME;
 	UDF_I_UMTIME(inode) = UDF_I_UCTIME(inode) = UDF_I_UCRTIME(inode) = CURRENT_UTIME;
-	UDF_I_NEW_INODE(inode) = 1;
+	UDF_I_BH(inode) = udf_tgetblk(sb, inode->i_ino);
+	lock_buffer(UDF_I_BH(inode));
+	memset(UDF_I_BH(inode)->b_data, 0x00, sb->s_blocksize);
+	set_buffer_uptodate(UDF_I_BH(inode));
+	unlock_buffer(UDF_I_BH(inode));
+	udf_write_inode(inode, 0);
 	insert_inode_hash(inode);
 	mark_inode_dirty(inode);
diff -u -r -N ../../linus/main/linux/fs/udf/inode.c linux/fs/udf/inode.c
--- ../../linus/main/linux/fs/udf/inode.c	Sat Jul 27 19:30:33 2002
+++ linux/fs/udf/inode.c	Sun Jul 28 15:14:36 2002
@@ -122,6 +122,11 @@
 	clear_inode(inode);
 }
+void udf_clear_inode(struct inode *inode)
+{
+	udf_release_data(UDF_I_BH(inode));
+}
+
 void udf_discard_prealloc(struct inode * inode)
 {
 	if (inode->i_size && inode->i_size != UDF_I_LENEXTENTS(inode) &&
@@ -162,10 +167,8 @@
 void udf_expand_file_adinicb(struct inode * inode, int newsize, int * err)
 {
-	struct buffer_head *bh = NULL;
 	struct page *page;
 	char *kaddr;
-	int block;
 	/* from now on we have normal address_space methods */
 	inode->i_data.a_ops = &udf_aops;
@@ -180,10 +183,6 @@
 		return;
 	}
-	block = udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0);
-	bh = udf_tread(inode->i_sb, block);
-	if (!bh)
-		return;
 	page = grab_cache_page(inode->i_mapping, 0);
 	if (!PageLocked(page))
 		PAGE_BUG(page);
@@ -192,21 +191,20 @@
 		kaddr = kmap(page);
 		memset(kaddr + UDF_I_LENALLOC(inode), 0x00,
 			PAGE_CACHE_SIZE - UDF_I_LENALLOC(inode));
-		memcpy(kaddr, bh->b_data + udf_file_entry_alloc_offset(inode),
+		memcpy(kaddr, UDF_I_BH(inode)->b_data + udf_file_entry_alloc_offset(inode),
 			UDF_I_LENALLOC(inode));
 		flush_dcache_page(page);
 		SetPageUptodate(page);
 		kunmap(page);
 	}
-	memset(bh->b_data + udf_file_entry_alloc_offset(inode),
+	memset(UDF_I_BH(inode)->b_data + udf_file_entry_alloc_offset(inode),
 		0, UDF_I_LENALLOC(inode));
 	UDF_I_LENALLOC(inode) = 0;
 	if (UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_USE_SHORT_AD))
 		UDF_I_ALLOCTYPE(inode) = ICBTAG_FLAG_AD_SHORT;
 	else
 		UDF_I_ALLOCTYPE(inode) = ICBTAG_FLAG_AD_LONG;
-	mark_buffer_dirty_inode(bh, inode);
-	udf_release_data(bh);
+	mark_buffer_dirty_inode(UDF_I_BH(inode), inode);
 	inode->i_data.a_ops->writepage(page);
 	page_cache_release(page);
@@ -217,7 +215,7 @@
 struct buffer_head * udf_expand_dir_adinicb(struct inode *inode, int *block, int *err)
 {
 	int newblock;
-	struct buffer_head *sbh = NULL, *dbh = NULL;
+	struct buffer_head *dbh = NULL;
 	lb_addr bloc, eloc;
 	uint32_t elen, extoffset;
@@ -247,9 +245,6 @@
 		UDF_I_LOCATION(inode).partitionReferenceNum, 0);
 	if (!newblock)
 		return NULL;
-	sbh = udf_tread(inode->i_sb, inode->i_ino);
-	if (!sbh)
-		return NULL;
 	dbh = udf_tgetblk(inode->i_sb, newblock);
 	if (!dbh)
 		return NULL;
@@ -260,7 +255,7 @@
 	mark_buffer_dirty_inode(dbh, inode);
 	sfibh.soffset = sfibh.eoffset = (f_pos & ((inode->i_sb->s_blocksize - 1) >> 2)) << 2;
-	sfibh.sbh = sfibh.ebh = sbh;
+	sfibh.sbh = sfibh.ebh = UDF_I_BH(inode);
 	dfibh.soffset = dfibh.eoffset = 0;
 	dfibh.sbh = dfibh.ebh = dbh;
 	while ( (f_pos < size) )
@@ -268,7 +263,6 @@
 		sfi = udf_fileident_read(inode, &f_pos, &sfibh, &cfi, NULL, NULL, NULL, NULL, NULL, NULL);
 		if (!sfi)
 		{
-			udf_release_data(sbh);
 			udf_release_data(dbh);
 			return NULL;
 		}
@@ -279,14 +273,13 @@
 		if (udf_write_fi(inode, sfi, dfi, &dfibh, sfi->impUse,
 			sfi->fileIdent + sfi->lengthOfImpUse))
 		{
-			udf_release_data(sbh);
 			udf_release_data(dbh);
 			return NULL;
 		}
 	}
 	mark_buffer_dirty_inode(dbh, inode);
-	memset(sbh->b_data + udf_file_entry_alloc_offset(inode),
+	memset(UDF_I_BH(inode)->b_data + udf_file_entry_alloc_offset(inode),
 		0, UDF_I_LENALLOC(inode));
 	UDF_I_LENALLOC(inode) = 0;
@@ -300,11 +293,10 @@
 	elen = inode->i_size;
 	UDF_I_LENEXTENTS(inode) = elen;
 	extoffset = udf_file_entry_alloc_offset(inode);
-	udf_add_aext(inode, &bloc, &extoffset, eloc, elen, &sbh, 0);
+	udf_add_aext(inode, &bloc, &extoffset, eloc, elen, &UDF_I_BH(inode), 0);
 	/* UniqueID stuff */
-	mark_buffer_dirty(sbh);
-	udf_release_data(sbh);
+	mark_buffer_dirty(UDF_I_BH(inode));
 	mark_inode_dirty(inode);
 	return dbh;
 }
@@ -724,7 +716,7 @@
 			if (elen > numalloc)
 			{
-				laarr[c].extLength -=
+				laarr[i].extLength -=
 					(numalloc << inode->i_sb->s_blocksize_bits);
 				numalloc = 0;
 			}
@@ -876,13 +868,8 @@
 			offset = (inode->i_size & (inode->i_sb->s_blocksize - 1)) +
 				udf_file_entry_alloc_offset(inode);
-			if ((bh = udf_tread(inode->i_sb,
-				udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0))))
-			{
-				memset(bh->b_data + offset, 0x00, inode->i_sb->s_blocksize - offset);
-				mark_buffer_dirty(bh);
-				udf_release_data(bh);
-			}
+			memset(UDF_I_BH(inode)->b_data + offset, 0x00, inode->i_sb->s_blocksize - offset);
+			mark_buffer_dirty(UDF_I_BH(inode));
 			UDF_I_LENALLOC(inode) = inode->i_size;
 		}
 	}
@@ -1019,7 +1006,6 @@
 		return;
 	}
 	udf_fill_inode(inode, bh);
-	udf_release_data(bh);
 }
 static void udf_fill_inode(struct inode *inode, struct buffer_head *bh)
@@ -1030,8 +1016,6 @@
 	long convtime_usec;
 	int offset, alen;
-	UDF_I_NEW_INODE(inode) = 0;
-
 	fe = (struct fileEntry *)bh->b_data;
 	efe = (struct extendedFileEntry *)bh->b_data;
@@ -1040,6 +1024,7 @@
 	else /* if (le16_to_cpu(fe->icbTag.strategyType) == 4096) */
 		UDF_I_STRAT4096(inode) = 1;
+	UDF_I_BH(inode) = bh;
 	UDF_I_ALLOCTYPE(inode) = le16_to_cpu(fe->icbTag.flags) & ICBTAG_FLAG_AD_MASK;
 	UDF_I_UMTIME(inode) = 0;
 	UDF_I_UCTIME(inode) = 0;
@@ -1307,7 +1292,7 @@
 static int udf_update_inode(struct inode *inode, int do_sync)
 {
-	struct buffer_head *bh = NULL;
+	struct buffer_head *bh;
 	struct fileEntry *fe;
 	struct extendedFileEntry *efe;
 	uint32_t udfperms;
@@ -1317,8 +1302,7 @@
 	timestamp cpu_time;
 	int err = 0;
-	bh = udf_tread(inode->i_sb,
-		udf_get_lb_pblock(inode->i_sb, UDF_I_LOCATION(inode), 0));
+	bh = UDF_I_BH(inode);
 	if (!bh)
 	{
@@ -1327,17 +1311,6 @@
 	}
 	fe = (struct fileEntry *)bh->b_data;
 	efe = (struct extendedFileEntry *)bh->b_data;
-	if (UDF_I_NEW_INODE(inode) == 1)
-	{
-		if (UDF_I_EXTENDED_FE(inode) == 0)
-			memset(bh->b_data, 0x00, sizeof(struct fileEntry));
-		else
-			memset(bh->b_data, 0x00, sizeof(struct extendedFileEntry));
-		memset(bh->b_data + udf_file_entry_alloc_offset(inode) +
-			UDF_I_LENALLOC(inode), 0x0, inode->i_sb->s_blocksize -
-			udf_file_entry_alloc_offset(inode) - UDF_I_LENALLOC(inode));
-		UDF_I_NEW_INODE(inode) = 0;
-	}
 	if (le16_to_cpu(fe->descTag.tagIdent) == TAG_IDENT_USE)
 	{
@@ -1357,7 +1330,6 @@
 			use->descTag.tagChecksum += ((uint8_t *)&(use->descTag))[i];
 		mark_buffer_dirty(bh);
-		udf_release_data(bh);
 		return err;
 	}
@@ -1545,7 +1517,6 @@
 			err = -EIO;
 		}
 	}
-	udf_release_data(bh);
 	return err;
 }
@@ -1680,7 +1651,7 @@
 		sptr = (*bh)->b_data + *extoffset;
 		*extoffset = sizeof(struct allocExtDesc);
-		if (memcmp(&UDF_I_LOCATION(inode), &obloc, sizeof(lb_addr)))
+		if (*bh != UDF_I_BH(inode))
 		{
 			aed = (struct allocExtDesc *)(*bh)->b_data;
 			aed->lengthAllocDescs =
@@ -1731,7 +1702,7 @@
 	etype = udf_write_aext(inode, *bloc, extoffset, eloc, elen, *bh, inc);
-	if (!memcmp(&UDF_I_LOCATION(inode), bloc, sizeof(lb_addr)))
+	if (*bh == UDF_I_BH(inode))
 	{
 		UDF_I_LENALLOC(inode) += adsize;
 		mark_inode_dirty(inode);
@@ -1797,7 +1768,7 @@
 		}
 	}
-	if (memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+	if (bh != UDF_I_BH(inode))
 	{
 		if (!UDF_QUERY_FLAG(inode->i_sb, UDF_FLAG_STRICT) || UDF_SB_UDFREV(inode->i_sb) >= 0x0201)
 		{
@@ -1839,10 +1810,9 @@
 	tagIdent = le16_to_cpu(((tag *)(*bh)->b_data)->tagIdent);
-	if (!memcmp(&UDF_I_LOCATION(inode), bloc, sizeof(lb_addr)))
+	if (*bh == UDF_I_BH(inode))
 	{
-		if (tagIdent == TAG_IDENT_FE || tagIdent == TAG_IDENT_EFE ||
-			UDF_I_NEW_INODE(inode))
+		if (tagIdent == TAG_IDENT_FE || tagIdent == TAG_IDENT_EFE)
 		{
 			pos = udf_file_entry_alloc_offset(inode);
 			alen = UDF_I_LENALLOC(inode) + pos;
@@ -1959,7 +1929,7 @@
 		}
 	}
-	if (!memcmp(&UDF_I_LOCATION(inode), bloc, sizeof(lb_addr)))
+	if (*bh == UDF_I_BH(inode))
 	{
 		if (!(UDF_I_EXTENDED_FE(inode)))
 			pos = sizeof(struct fileEntry) + UDF_I_LENEATTR(inode);
@@ -2111,7 +2081,7 @@
 		udf_free_blocks(inode->i_sb, inode, nbloc, 0, 1);
 		udf_write_aext(inode, obloc, &oextoffset, eloc, elen, obh, 1);
 		udf_write_aext(inode, obloc, &oextoffset, eloc, elen, obh, 1);
-		if (!memcmp(&UDF_I_LOCATION(inode), &obloc, sizeof(lb_addr)))
+		if (obh == UDF_I_BH(inode))
 		{
 			UDF_I_LENALLOC(inode) -= (adsize * 2);
 			mark_inode_dirty(inode);
@@ -2131,7 +2101,7 @@
 	else
 	{
 		udf_write_aext(inode, obloc, &oextoffset, eloc, elen, obh, 1);
-		if (!memcmp(&UDF_I_LOCATION(inode), &obloc, sizeof(lb_addr)))
+		if (obh == UDF_I_BH(inode))
 		{
 			UDF_I_LENALLOC(inode) -= adsize;
 			mark_inode_dirty(inode);
diff -u -r -N ../../linus/main/linux/fs/udf/namei.c linux/fs/udf/namei.c
--- ../../linus/main/linux/fs/udf/namei.c	Sat Jul 27 19:30:33 2002
+++ linux/fs/udf/namei.c	Sun Jul 28 15:14:36 2002
@@ -818,7 +818,10 @@
 	}
 	if (!(fibh.sbh = fibh.ebh = udf_tread(dir->i_sb, block)))
+	{
+		udf_release_data(bh);
 		return 0;
+	}
 	while ( (f_pos < size) )
 	{
@@ -835,6 +838,9 @@
 		if (cfi.lengthFileIdent && (cfi.fileCharacteristics & FID_FILE_CHAR_DELETED) == 0)
 		{
+			if (fibh.sbh != fibh.ebh)
+				udf_release_data(fibh.ebh);
+			udf_release_data(fibh.sbh);
 			udf_release_data(bh);
 			return 0;
 		}
diff -u -r -N ../../linus/main/linux/fs/udf/super.c linux/fs/udf/super.c
--- ../../linus/main/linux/fs/udf/super.c	Sat Jul 27 19:30:34 2002
+++ linux/fs/udf/super.c	Sun Jul 28 15:14:36 2002
@@ -161,6 +161,7 @@
 	write_inode:		udf_write_inode,
 	put_inode:		udf_put_inode,
 	delete_inode:		udf_delete_inode,
+	clear_inode:		udf_clear_inode,
 	put_super:		udf_put_super,
 	write_super:		udf_write_super,
 	statfs:			udf_statfs,
@@ -383,10 +384,6 @@
 	UDF_SB(sb)->s_gid = uopt.gid;
 	UDF_SB(sb)->s_umask = uopt.umask;
-#if UDFFS_RW != 1
-	*flags |= MS_RDONLY;
-#endif
-
 	if ((*flags & MS_RDONLY) == (sb->s_flags & MS_RDONLY))
 		return 0;
 	if (*flags & MS_RDONLY)
@@ -1428,10 +1425,6 @@
 	sb->u.generic_sbp = sbi;
 	memset(UDF_SB(sb), 0x00, sizeof(struct udf_sb_info));
-#if UDFFS_RW != 1
-	sb->s_flags |= MS_RDONLY;
-#endif
-
 	if (!udf_parse_options((char *)options, &uopt))
 		goto error_out;
@@ -1543,8 +1536,8 @@
 	{
 		timestamp ts;
 		udf_time_to_stamp(&ts, UDF_SB_RECORDTIME(sb), 0);
-		udf_info("UDF %s-%s (%s) Mounting volume '%s', timestamp %04u/%02u/%02u %02u:%02u (%x)\n",
-			UDFFS_VERSION, UDFFS_RW ? "rw" : "ro", UDFFS_DATE,
+		udf_info("UDF %s (%s) Mounting volume '%s', timestamp %04u/%02u/%02u %02u:%02u (%x)\n",
+			UDFFS_VERSION, UDFFS_DATE,
 			UDF_SB_VOLIDENT(sb), ts.year, ts.month, ts.day, ts.hour, ts.minute, ts.typeAndTimezone);
 	}
diff -u -r -N ../../linus/main/linux/fs/udf/truncate.c linux/fs/udf/truncate.c
--- ../../linus/main/linux/fs/udf/truncate.c	Sat Jul 27 19:30:34 2002
+++ linux/fs/udf/truncate.c	Sun Jul 28 15:14:36 2002
@@ -95,7 +95,7 @@
 	else
 		lenalloc = extoffset - adsize;
-	if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+	if (bh == UDF_I_BH(inode))
 		lenalloc -= udf_file_entry_alloc_offset(inode);
 	else
 		lenalloc -= sizeof(struct allocExtDesc);
@@ -108,7 +108,7 @@
 			extoffset = 0;
 			if (lelen)
 			{
-				if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+				if (bh == UDF_I_BH(inode))
 					memset(bh->b_data, 0x00, udf_file_entry_alloc_offset(inode));
 				else
 					memset(bh->b_data, 0x00, sizeof(struct allocExtDesc));
@@ -116,7 +116,7 @@
 			}
 			else
 			{
-				if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+				if (bh == UDF_I_BH(inode))
 				{
 					UDF_I_LENALLOC(inode) = lenalloc;
 					mark_inode_dirty(inode);
@@ -153,7 +153,7 @@
 	if (lelen)
 	{
-		if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+		if (bh == UDF_I_BH(inode))
 			memset(bh->b_data, 0x00, udf_file_entry_alloc_offset(inode));
 		else
 			memset(bh->b_data, 0x00, sizeof(struct allocExtDesc));
@@ -161,7 +161,7 @@
 	}
 	else
 	{
-		if (!memcmp(&UDF_I_LOCATION(inode), &bloc, sizeof(lb_addr)))
+		if (bh == UDF_I_BH(inode))
 		{
 			UDF_I_LENALLOC(inode) = lenalloc;
 			mark_inode_dirty(inode);
diff -u -r -N ../../linus/main/linux/fs/udf/udf_i.h linux/fs/udf/udf_i.h
--- ../../linus/main/linux/fs/udf/udf_i.h	Sat Jul 27 19:30:34 2002
+++ linux/fs/udf/udf_i.h	Sun Jul 28 15:14:36 2002
@@ -15,12 +15,12 @@
 #define UDF_I_ALLOCTYPE(X)	( UDF_I(X)->i_alloc_type )
 #define UDF_I_EXTENDED_FE(X)	( UDF_I(X)->i_extended_fe )
 #define UDF_I_STRAT4096(X)	( UDF_I(X)->i_strat_4096 )
-#define UDF_I_NEW_INODE(X)	( UDF_I(X)->i_new_inode )
 #define UDF_I_NEXT_ALLOC_BLOCK(X)	( UDF_I(X)->i_next_alloc_block )
 #define UDF_I_NEXT_ALLOC_GOAL(X)	( UDF_I(X)->i_next_alloc_goal )
 #define UDF_I_UMTIME(X)	( UDF_I(X)->i_umtime )
 #define UDF_I_UCTIME(X)	( UDF_I(X)->i_uctime )
 #define UDF_I_CRTIME(X)	( UDF_I(X)->i_crtime )
 #define UDF_I_UCRTIME(X)	( UDF_I(X)->i_ucrtime )
+#define UDF_I_BH(X)	( UDF_I(X)->i_bh )
 #endif /* !defined(_LINUX_UDF_I_H) */
diff -u -r -N ../../linus/main/linux/fs/udf/udfdecl.h linux/fs/udf/udfdecl.h
--- ../../linus/main/linux/fs/udf/udfdecl.h	Sat Jul 27 19:30:34 2002
+++ linux/fs/udf/udfdecl.h	Sun Jul 28 15:14:36 2002
@@ -114,6 +114,7 @@
 extern void udf_read_inode(struct inode *);
 extern void udf_put_inode(struct inode *);
 extern void udf_delete_inode(struct inode *);
+extern void udf_clear_inode(struct inode *);
 extern void udf_write_inode(struct inode *, int);
 extern long udf_block_map(struct inode *, long);
 extern int8_t inode_bmap(struct inode *, int, lb_addr *, uint32_t *, lb_addr *, uint32_t *, uint32_t *, struct buffer_head **);
diff -u -r -N ../../linus/main/linux/include/linux/udf_fs_i.h linux/include/linux/udf_fs_i.h
--- ../../linus/main/linux/include/linux/udf_fs_i.h	Sat Jul 27 19:31:55 2002
+++ linux/include/linux/udf_fs_i.h	Sun Jul 28 15:14:36 2002
@@ -30,6 +30,7 @@
 struct udf_inode_info
 {
+	struct buffer_head *i_bh;
 	long i_umtime;
 	long i_uctime;
 	long i_crtime;
@@ -45,8 +46,7 @@
 	unsigned i_alloc_type : 3;
 	unsigned i_extended_fe : 1;
 	unsigned i_strat_4096 : 1;
-	unsigned i_new_inode : 1;
-	unsigned reserved : 26;
+	unsigned reserved : 27;
 	struct inode vfs_inode;
 };

--
Peter Osterlund - petero2@telia.com
http://w1.894.telia.com/~u89404340
On Sun, Jul 28, 2002 at 03:39:36PM +0200, Peter Osterlund wrote:
On Fri, 19 Jul 2002, Peter Osterlund wrote:
On Fri, 19 Jul 2002, Ben Fennema wrote:
On Fri, Jul 19, 2002 at 01:33:13PM +0200, Peter Osterlund wrote:
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Ok, here's a patch which should fix the problem.
Let me know if it shows a noticeable improvement (or if you can't get it to apply nicely to whatever kernel version you're using) =]
Yes, it works perfectly. I applied it to the 2.4 cvs tree, then copied it into the 2.4.19-rc2 tree.
I needed this patch for 2.5.29 too, so I made a forward port. It appears to work fine and also gives a big performance increase in 2.5. Can you please have a look at the patch and maybe submit it to Linus if you think it's OK? Thanks.
I've actually been working on a "better" =] patch. The one I sent out was kind of a quick and dirty fix. I should be checking it into the UDF cvs tree soon =) Ben
Hi, Am Fre, 2002-07-19 um 19.18 schrieb Ben Fennema:
On Fri, Jul 19, 2002 at 01:33:13PM +0200, Peter Osterlund wrote:
On Thu, 18 Jul 2002, Ben Fennema wrote:
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Hmm, it could be an issue with data being embedded in the inode. The data would go through the page cache, but the inode would go through the buffer cache, and this could force a read or something funky like that.
Mounting with -o noadinicb turns off this behavior. You could try it and see if it eliminates the extra reads.
There are still reads. I'm not sure if it's exactly the same number of reads, but I didn't notice any difference in performance.
(But maybe you already know what the problem is. I got that impression from your other mail.)
Ok, here's a patch which should fix the problem.
Let me know if it shows a noticeable improvement (or if you can't get it to apply nicely to whatever kernel version you're using) =]
It's against the 2.4 cvs tree, and prolly only works on 2.4.18+ (since all the version checks got ripped out of the code, which is actually what makes the patch as large as it is)
Does anybody have Ben's patch for a 2.4.19-rc2 kernel? I have difficulties applying Ben's patch to my kernel. I would like to test this patch, because I have the same problems (2k of 0s in files copied to CD-RW and ultra-low speed). Thanks Manfred
On 21 Jul 2002, Manfred Kreisl wrote:
Does anybody have Ben's patch for a 2.4.19-rc2 kernel? I have difficulties applying Ben's patch to my kernel. I would like to test this patch, because I have the same problems (2k of 0s in files copied to CD-RW and ultra-low speed).
Here is the patch I use. This is exactly equivalent to applying Ben's patch to the 2.4 cvs tree, then copying the result into the kernel tree. -- Peter Osterlund - petero2@telia.com http://w1.894.telia.com/~u89404340
Hello Peter, Am Mon, 2002-07-22 um 02.40 schrieb Peter Osterlund:
On 21 Jul 2002, Manfred Kreisl wrote:
Does anybody have Ben's patch for a 2.4.19-rc2 kernel? I have difficulties applying Ben's patch to my kernel. I would like to test this patch, because I have the same problems (2k of 0s in files copied to CD-RW and ultra-low speed).
Here is the patch I use. This is exactly equivalent to applying Ben's patch to the 2.4 cvs tree, then copying the result into the kernel tree.
Thanks for the patch. I immediately patched my kernel without any problems and made some tests. Here are the results:
- Copying the kernel sources is a little faster than without Ben's patch.
- The 2k errors are still there, but what I saw is: with a newly initialized CD there are always 2k of 0s at the beginning of the file, whereas with a used CD (data was already on the disk) there is 2k of garbage data at the beginning of the file, never 0s. So it seems that in case of an error the first block of a file is not written to disk.
- Erasing the kernel tree on CD takes more time than writing (approx. 20 MByte in 10 minutes, 1 file per second). This is really terrible.
Manfred
Manfred Kreisl wrote:
Hello Peter,
Am Mon, 2002-07-22 um 02.40 schrieb Peter Osterlund:
On 21 Jul 2002, Manfred Kreisl wrote:
Does anybody have Ben's patch for a 2.4.19-rc2 kernel? I have difficulties applying Ben's patch to my kernel. I would like to test this patch, because I have the same problems (2k of 0s in files copied to CD-RW and ultra-low speed).
Here is the patch I use. This is exactly equivalent to applying Ben's patch the the 2.4 cvs tree, then copying the result into the kernel tree.
thanks for the patch.
I immediately patched my kernel without any problems and made some tests.
Here are the results:
- Copying the kernel sources is a little faster than without Ben's patch.
- The 2k errors are still there, but what I saw is: with a newly initialized CD there are always 2k of 0s at the beginning of the file, whereas with a used CD (data was already on the disk) there is 2k of garbage data at the beginning of the file, never 0s. So it seems that in case of an error the first block of a file is not written to disk.
- Erasing the kernel tree on CD takes more time than writing (approx. 20 MByte in 10 minutes, 1 file per second). This is really terrible.
Manfred
are you doing this on CD-RW or DVD+RW? gustavo
Hi Gustavo, Am Mit, 2002-07-24 um 00.27 schrieb guStaVo ZaeRa:
thanks for the patch.
I immediately patched my kernel without any problems and made some tests.
Here are the results:
- Copying the kernel sources is a little faster than without Ben's patch.
- The 2k errors are still there, but what I saw is: with a newly initialized CD there are always 2k of 0s at the beginning of the file, whereas with a used CD (data was already on the disk) there is 2k of garbage data at the beginning of the file, never 0s. So it seems that in case of an error the first block of a file is not written to disk.
- Erasing the kernel tree on CD takes more time than writing (approx. 20 MByte in 10 minutes, 1 file per second). This is really terrible.
Manfred
are you doing this on CD-RW or DVD+RW?
CD-RW Manfred
On Thu, Jul 18, 2002 at 11:31:21PM +0200, Peter Osterlund wrote:
On Wed, 17 Jul 2002, Sergiy Kudryk wrote:
Is it possible to exclude mixed write/read operations with pktcdvd?
For example, when I move multiple small files from a CD-RW disk to the hard drive, mixed read/write transactions occur. This degrades overall performance and can potentially destroy the laser head of the CD-RW drive.
Could a mechanism similar to a semaphore be implemented in the pktcdvd code: stop write transactions when there is a read pending, and vice versa?
This is what I wrote in an earlier mail:
I think the speed issue is caused by two things:
The udf filesystem seems to be inefficient at handling many small files. I don't know if that's caused by the current implementation or by something in the udf specification that requires such behavior.
The pktcdvd module bypasses the I/O elevator when creating write requests for the CD-RW drive. This can make performance suffer badly when there is a mixed read/write load. The 2.5 version of pktcdvd has fixed this problem, but a backport is not easy because it relies heavily on the new bio infrastructure in 2.5.
I still think that's true, but I have some more information on the udf filesystem behavior. If I create a new udf filesystem and start adding lots of small files to it, at first (before dirty data writeback starts) the speed at which files are added is limited by how fast data can be *read* from the CD. I haven't looked at the udf code yet, but I would guess the udf filesystem reads a disk block for each file being added.
Yup, yup, yup. Adding data to a file causes a read of the inode, even for new files... bah, I guess I should fix that =] Ben
participants (8)
- Ben Fennema
- Chris Clayton
- guStaVo ZaeRa
- Manfred Kreisl
- Peter Osterlund
- Rene Bartsch
- Sergiy Kudryk
- Wayde Milas