On 25/12/2021 15.16, Carlos E. R. wrote:
Hi,
I use this bash script to image partitions; I arrived at it with help from people here:
Testing on my "powerful" desktop machine. Current version: mkfifo mdpipe dd if=/dev/$1 status=progress bs=16M | tee mdpipe | \ zstd --size-hint=$4 -$5 > $3.zst & md5sum -b mdpipe | tee -a md5checksum_expanded wait rm mdpipe $1 $2 $3 $4 $5 nvme0n1p2 "23m" "nvme0n1p2__nvme-swap" 100G 4 nvme0n1p5 "16m" "nvme0n1p5__nvme-main" 150G 3 Write speed was: 107374182400 bytes (107 GB, 100 GiB) copied, 1106.29 s, 97.1 MB/s 161059176448 bytes (161 GB, 150 GiB) copied, 705.491 s, 228 MB/s Then I did a comparison using dd alone, no compression; I got these speeds: 107374182400 bytes (107 GB, 100 GiB) copied, 317.29 s, 338 MB/s 161059176448 bytes (161 GB, 150 GiB) copied, 265.229 s, 607 MB/s Now, notice the much faster write speed on the destination without using zstd compression (writing to rotating rust). This is a first. The differences are a powerful CPU (AMD Ryzen 5 3600X 6-Core Processor), on Leap 15.3 (previous testing were old laptops and Leap 15.2). Source disk is M.2 nvme "disk", destination is rotating rust over USB3, running LUKS encrypted and compressed btrfs partition on 15.3. mount output: /dev/mapper/cr_backup on /backup type btrfs (rw,relatime,compress=zlib:3,space_cache,subvolid=5,subvol=/) Previous testing were Leap 15.2, SSD source, destination USB2/USB3, running LUKS encrypted and compressed btrfs partition on 15.2. Obviously, that I get a constant write speed above 150MB/S (the hardware maximum for rotating rust) has to be due to the effective btrfs compression, something that did not happen on my other machines. Although: Erebor4:~ # hdparm -tT /dev/sdb4 /dev/sdb4: Timing cached reads: 27872 MB in 2.00 seconds = 13952.21 MB/sec Timing buffered disk reads: 604 MB in 3.00 seconds = 201.21 MB/sec Erebor4:~ # Compression ratios obtained: Erebor4:/backup/images/001 # compsize nvme0n1p2__nvme-swap.img Processed 1 file, 609160 regular extents (609160 refs), 0 inline. Type Perc Disk Usage Uncompressed Referenced TOTAL 47% 47G 100G 100G none 100% 31G 31G 31G zlib 23% 16G 68G 68G Erebor4:/backup/images/001 # compsize nvme0n1p2__nvme-swap.zst Processed 1 file, 311 regular extents (311 refs), 0 inline. Type Perc Disk Usage Uncompressed Referenced TOTAL 100% 34G 34G 34G none 100% 34G 34G 34G Erebor4:/backup/images/001 # Erebor4:/backup/images/001 # compsize nvme0n1p5__nvme-main.img Processed 1 file, 1025001 regular extents (1025001 refs), 0 inline. Type Perc Disk Usage Uncompressed Referenced TOTAL 26% 40G 149G 149G none 100% 27G 27G 27G zlib 10% 12G 122G 122G Erebor4:/backup/images/001 # compsize nvme0n1p5__nvme-main.zst Processed 1 file, 4342 regular extents (4342 refs), 0 inline. Type Perc Disk Usage Uncompressed Referenced TOTAL 99% 33G 33G 33G none 100% 33G 33G 33G zlib 14% 11M 74M 74M Erebor4:/backup/images/001 # man says: The fields above are: Type compression algorithm Perc disk usage/uncompressed (compression ratio) Disk Usage blocks on the disk; this is what storing these files actually costs you (save for RAID considerations) Uncompressed uncompressed extents; what you would need without compression - includes deduplication savings and pinned extent waste Referenced apparent file sizes (sans holes); this is what a traditional filesystem that supports holes and efficient tail packing, or tar -S, would need to store these files What it does not explain are the TOTAL, none, zlib rows. I think we have to look at the "TOTAL" rows. Compression is less than what "zstd 3" gets, but if the goal is speed, then it is better without zstd (which also facilitates recovery). 
At least, that is the case on a powerful computer running 15.3. So the results can vary a lot per machine. Maybe the kernel varies the btrfs compression effort depending on the CPU?

--
Cheers / Saludos,

        Carlos E. R.
        (from oS Leap 15.3 x86_64 (Erebor-4))