Andy Polyakov wrote:
Well, if the media has turned rotten there is not much one can do for now, right? BTW, what do you mean by "re-formatting": 'dvd+rw-format -f ...' or 'mkudffs ...' alone? It has been observed [by several people now] that excessive reformats, i.e. with 'dvd+rw-format -f ...' *or* equivalent Windows programs, render DVD+RW media unusable after as few as 10-30 reformats. So you should run dvd+rw-format only once and then stick to mkudffs (or growisofs for that matter:-).
no, i'm only using mkudffs. :) i put the dvd 200i drive back in, and had no problems copying a 2GB file onto it. Not even one error message! :D
You have to understand that all filesystem I/O passes through the common buffer cache, and working with large filesets can put considerable pressure on the VM subsystem, which might start suffering from excessive page scans. Under such conditions delays can be more than just "noticeable." Another user reported about 4 times better growisofs performance after he bound his /dev/scdN to a /dev/raw/rawM device and bypassed the buffer cache. Of course in this case /dev/raw/rawM is not an option and you might have to learn to accept and live with apparently suboptimal writing rates:-)
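[For reference, on 2.4-era kernels the binding mentioned above was done with the raw(8) utility from util-linux. The device names below are assumptions, adjust them to your system; and note that, as said above, the raw-device route bypasses the filesystem entirely, so it does not help an ordinary file copy onto a mounted UDF volume:]

```shell
# Example only: bind the writer's SCSI CD device to a raw device node
# so reads/writes bypass the buffer cache.  Needs root; /dev/raw/raw1
# and /dev/scd0 are assumed names, adjust to your system.
raw /dev/raw/raw1 /dev/scd0   # bind raw1 to the writer
raw -qa                       # query all bindings to verify
```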
ok... i can live with that, but would there be a way of speeding up the VM? :)
i can see that in get_capabilities() in sr.c, the speed is set to 1 if the capabilities were not read from the drive. Otherwise the speed is set to:
scsi_CDs[i].cdi.speed = ((buffer[n + 8] << 8) + buffer[n + 9]) / 176;
in my case that is 32/32, but the writer cannot write at 32! is there any way i can find out where to look for the write speed?
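[For what it's worth, that expression just converts the value the drive reports in its CD capabilities mode page (0x2A) into a CD "x" multiple: bytes 8-9 of the page hold the maximum *read* speed in kB/s, and 176 kB/s is 1x CD speed, hence the divisor. That is also why it says nothing about write speed; per the MMC drafts the maximum write speed sits at a different offset in the same page, which sr.c doesn't look at. A small sketch, with a made-up buffer for illustration:]

```python
# Sketch of the speed computation in sr.c's get_capabilities().
# Bytes 8-9 of mode page 0x2A hold the drive's maximum READ speed in
# kB/s; dividing by 176 (1x CD speed in kB/s) gives the familiar "32x".
# The buffer contents below are made up for illustration.

def cd_speed(buffer, n):
    """CD 'x' speed from a mode-sense buffer; n = start of page 0x2A."""
    return ((buffer[n + 8] << 8) + buffer[n + 9]) // 176

buf = [0] * 16
buf[8], buf[9] = 0x16, 0x0D        # 0x160D = 5645 kB/s
print(cd_speed(buf, 0))            # -> 32
```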
As far as I understand you can't control DVD writing speed. The speed settings you mention are applicable only when CD media is mounted. Well, Ricoh might have implemented vendor-specific DVD speed control, but I wouldn't know, never seen any vendor documentation:-( But as it is, the unit always writes as fast as it can. What you experience is a VM performance problem rather than the unit's "fault." Well, write request segmentation might affect the unit's performance. What I mean is that when you say "write 2KB," the unit has to read a whole 32KB block, replace the corresponding 2KB and then write out the resulting 32KB. Now when you copy a large file the write requests are of course larger than 2KB, but they are not necessarily aligned at 32KB boundaries, so requests effectively get fragmented anyway. Some caching algorithm is of course in place, but obviously it's not 100% effective, as the unit does ask for "SYNCHRONIZE CACHE" now and then.
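[To make the fragmentation point concrete, here is a small sketch; the packet size constant and the helper are mine, not from any driver. It counts how many 32KB packets a write touches and how many of them are only partially covered, and therefore force the unit into a read-modify-write cycle:]

```python
PACKET = 32 * 1024  # DVD+RW ECC block / packet size in bytes

def packets_touched(offset, length):
    """Return (total, rmw): how many 32KB packets a write of `length`
    bytes at byte `offset` touches, and how many of those are only
    partially covered (forcing a read-modify-write of that packet)."""
    first = offset // PACKET
    last = (offset + length - 1) // PACKET
    partial = set()
    if offset % PACKET:                 # leading edge mid-packet
        partial.add(first)
    if (offset + length) % PACKET:      # trailing edge mid-packet
        partial.add(last)
    return last - first + 1, len(partial)

print(packets_touched(0, 2048))         # -> (1, 1): tiny write, full RMW
print(packets_touched(0, 65536))        # -> (2, 0): aligned, no RMW
print(packets_touched(1024, 65536))     # -> (3, 2): unaligned, 2 RMWs
```

So an aligned 64KB write is just two packets written straight out, while the same 64KB shifted by 1KB drags three packets in, two of which must be read first.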
ah! ok, that clears up many things! thanks a lot, andy! :] gustavo