On 2014-10-12 14:02, Otto Rodusek wrote:
> On 12/10/14 10:26, Carlos E. R. wrote:

> Hi Carlos,

> Thanks for your feedback. Yes, I've used both dd_rescue and ddrescue (they are different programs), and I have found ddrescue to be the better tool.
If you compare dd_rescue and ddrescue directly, yes. But the trick is to use the wrapper script dd_rhelp instead of raw dd_rescue. It makes those adjustments automatically, on the fly: I think it invokes, and perhaps kills, dd_rescue as needed, many times over.
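For the archive, a minimal sketch of what that looks like (the device and image names here are hypothetical, and the command is echoed as a dry run so nothing touches a disk):

```shell
# Hypothetical names: /dev/sdb is the failing disk, sdb.img the destination.
# Shown as an echoed dry run; remove the "echo" to actually run it.
SRC=/dev/sdb
IMG=/mnt/rescue/sdb.img

# dd_rhelp drives dd_rescue for you: it grabs the easy areas first,
# skips quickly over bad zones, and returns to them later.
echo dd_rhelp "$SRC" "$IMG"
```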
> Greg pointed me in the correct direction with the parameters (--skip-size and -N) which more or less resolved the issues.
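For anyone reading this later, a sketch of that kind of two-pass run with GNU ddrescue (device and paths are hypothetical; I'm assuming --skip-size takes an initial[,max] skip and -N/--no-trim postpones trimming of bad areas). Commands are echoed as a dry run so nothing is written:

```shell
# Hypothetical names: /dev/sdb is the failing disk; the mapfile lets a
# later run resume and retry only the areas that previously failed.
SRC=/dev/sdb
IMG=/mnt/rescue/sdb.img
MAP=/mnt/rescue/sdb.map

# First pass: copy the easy data fast. On a read error, skip ahead
# (64 KiB initially, growing up to 1 MiB) and don't trim the edges of
# bad areas yet (-N / --no-trim). Echoed as a dry run.
echo ddrescue -N --skip-size=64Ki,1Mi "$SRC" "$IMG" "$MAP"

# Later, with the bulk of the data saved, retry only the bad areas:
echo ddrescue -r 3 "$SRC" "$IMG" "$MAP"
```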
> I would still be curious as to where/what causes the bottleneck. Yes, I do know that when a bad sector is found the system tries (many times) to recover it with multiple reads - this is the part that I was hoping to resolve:
And it can be many sectors, not only one. Also, with many faulty sectors I understand the head has trouble locating the tracks: it needs successful reads to know where it is, and each failed read causes a delay. If I recall correctly, the head is reinitialized after each failed attempt.
> whether there is a parameter or command to ignore the bad sector and not waste time on recovery when I know it's bad/dead. If it's a function of the drive firmware then there's no hope, I think, but if it's the Linux kernel or a Linux driver that is causing the re-reads, then maybe there is a parameter to simply not retry??
It is both. Both have a retry count. You control only the kernel side.
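On the kernel side, the knob I know of is the per-device SCSI command timeout under sysfs. A minimal sketch (the device name sdb is hypothetical, and the commands are echoed as a dry run so nothing is changed):

```shell
# Hypothetical device name: sdb is the failing disk. Shown as an echoed
# dry run; run the inner commands as root to actually apply them.
DEV=sdb

# Current kernel-side command timeout in seconds (the default is 30):
echo "cat /sys/block/$DEV/device/timeout"

# Lower it so a failing read is abandoned sooner:
echo "echo 7 > /sys/block/$DEV/device/timeout"
```

The drive firmware's own internal retries are a separate matter, outside the kernel's control.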
> For example, if I ( cat anyfile-with-a-bad-sector(s) ) it could potentially take a LONG time to display. I know it's not the "cat" command that is doing the retries on bad sectors. I was hoping there was a kernel parameter that could be tweaked to skip these re-reads.
Yes, these rescue tools adjust that count.

-- 
Cheers / Saludos,
Carlos E. R. (from 13.1 x86_64 "Bottle" at Telcontar)