Carlos E. R. wrote:
On 30/12/2021 19.49, Per Jessen wrote:
Carlos E. R. wrote:
On 30/12/2021 14.51, Per Jessen wrote:
Carlos E. R. wrote:
(the current directory is on the destination disk, external rotating rust via USB2)
This works fine.
Now, I noticed something.
In gkrellm I observe, as the script runs, that the reads from the source disk (an SSD, in this case) and the writes to the destination disk alternate; they are not simultaneous. And it took 5 hours to image perhaps 300 GB; of course, the destination is on USB2 and the CPU is old, but that alternation doesn't help.
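(The full script is not quoted here; in essence it is a dd-into-pigz pipeline, roughly of this shape, with the device name and image file name as placeholders:

  dd if=/dev/sda2 bs=16M | pigz > sda2.img.gz

the image being written to the current directory on the USB2 disk.)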
Could that be improved somehow?
I am assuming you have more than one core to run this on?
Yesss :-D
Okay :-)
I was just wondering if you had too many processes competing for CPU time, essentially causing a serialisation.
Ah, ok, I see what you mean.
No, I run this from a dedicated external hard disk, with an XFCE graphical system used for rescue/backup/restore operations. There is nothing installed or running on it: no email, browser, etc. Ok, the programs may be there, but they are never started. As I routinely run a terminal with top and atop, I would notice if something were using resources.
No, never mind everything else. Your script creates a number of inter-dependent processes - every time you pipe something, for instance - and your pigz is also parallelized.
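For example, with a dd-into-pigz pipeline of the shape sketched earlier, you already have the dd reader, pigz running one compression thread per core by default, and the shell redirection writing the image, all lock-stepped through small kernel pipe buffers. If CPU contention were the problem, one way to test it would be to cap pigz with -p, e.g.

  dd if=/dev/sda2 bs=16M | pigz -p 2 > sda2.img.gz

(device and file names again placeholders) - but that is just an illustration, not a claim that it helps here.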
Still, my laptop would do it faster if there wasn't that alternation between read and write operations (writing via USB2).
For some reason your read process is not reading ahead, even though it has plenty of time to fill up some buffers while the slow write I/O is taking place.
Right. Or there is no buffering of the read: dd fills its 16M buffer, then sends it on to the pipe. A better approach would be to start reading the next 16M while the previous one is still being written.
No doubt it does - unless it cannot get rid of the buffer.
So, I wonder if there is something that could be done.
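For example, would inserting a buffering stage between dd and pigz decouple the read from the write? Something like mbuffer with a large buffer, say

  dd if=/dev/sda2 bs=16M | mbuffer -m 1G | pigz > sda2.img.gz

(again placeholder names; mbuffer would have to be installed, and I have not tried this), so dd could keep reading while the slow side drains the buffer.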
I'm sure there is, but writing to USB will remain your bottleneck.

--
Per Jessen, Zürich (12.9°C)