[opensuse] Compressed filesystems?

Hi,

What filesystems do we have with transparent compression? Read/write, of course. I intend to create a compressed backup external disk (Leap 15.0).

I know of btrfs and zfs. zfs is not officially supported on openSUSE, which is a con. I also read on some sites not to use zfs on single drives (<https://www.ixsystems.com/community/threads/single-drive-zfs.35515/#post-216140>):

«Well, the CTO of iXsystems said something like "single disk ZFS is so pointless it's actually worse than not using ZFS". Technically you can do deduplication and compression. But there is no protection from corruption since there is no redundancy. So any error can be detected, but cannot be corrected. This sounds like an acceptable compromise, but it's actually not. The reason is that ZFS's metadata cannot be allowed to be corrupted. If it is, the zpool will likely be impossible to mount (and will probably crash the system once the corruption is found). So a couple of bad sectors in the right place will mean that all data on the zpool will be lost. Not some, all. Also there are no ZFS recovery tools, so you cannot recover any data on the drives. You cannot use the standard recovery tools that are designed for NTFS, FAT32, etc. either. They don't work correctly.»

Other alternatives? btrfs seems to support it well: <https://btrfs.wiki.kernel.org/index.php/Compression>

Then, what compression method to use? Default is zlib, says the wiki:

«There's a speed/ratio trade-off:

ZLIB -- slower, higher compression ratio (uses zlib level 3 setting, you can see the zlib level difference between 1 and 6 in zlib sources).
LZO -- faster compression and decompression than zlib, worse compression ratio, designed to be fast.
ZSTD -- (since v4.14) compression comparable to zlib with higher compression/decompression speeds and different ratio levels (details).

The differences depend on the actual data set and cannot be expressed by a single number or recommendation. Do your own benchmarks. LZO seems to give satisfying results for general use.»

That seems to point to ZSTD. The wiki says there are 15 compression levels with ZSTD, but it does not say which is the default or describe what to expect from each level; it only points to a "details" link. Maybe level 1 would do. I do not really want a high compression ratio; I prefer speed, because backups are large. And things like email compress very well anyway.

Ideas?

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
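Since the wiki leaves the level trade-off at "do your own benchmarks", a quick local probe answers it for a given data set. A minimal sketch using gzip levels as a universally available stand-in (if the zstd tool is installed, `zstd -b1 -e15 FILE` benchmarks every level directly); the sample data here is synthetic:

```shell
# Rough speed/ratio probe before picking a compression level.
# gzip is used here only as a stand-in that is installed everywhere.
sample=$(mktemp)
# Build a synthetic, highly compressible sample (mail-folder-like text).
for i in $(seq 1 2000); do
    echo "Subject: test message $i -- body body body body"
done > "$sample"
orig=$(stat -c%s "$sample")             # uncompressed size in bytes
fast=$(gzip -1 -c "$sample" | wc -c)    # fastest level
best=$(gzip -9 -c "$sample" | wc -c)    # highest level
echo "original=$orig level1=$fast level9=$best"
rm -f "$sample"
```

Running the same probe over a real mail folder (and prefixing the gzip calls with `time`) shows whether the higher levels are worth their CPU cost for that particular data.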

Carlos E. R. wrote:
Ideas?
For backups, what is wrong with "tar cJf"?

-- Per Jessen, Zürich (18.8°C) http://www.hostsuisse.com/ - virtual servers, made in Switzerland.

-- To unsubscribe, e-mail: opensuse+unsubscribe@opensuse.org To contact the owner, e-mail: opensuse+owner@opensuse.org
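For readers who don't parse tar flags on sight: `c` creates an archive, `J` filters it through xz, `f` names the output file. A throwaway round trip on temporary paths (illustrative only; tar invokes the xz tool for `J`):

```shell
# "tar cJf" = create (c) an xz-compressed (J) archive in file (f).
work=$(mktemp -d)
echo "some data" > "$work/a.txt"
tar cJf "$work/backup.tar.xz" -C "$work" a.txt   # create the archive
listing=$(tar tJf "$work/backup.tar.xz")         # t lists its contents
echo "archived: $listing"
rm -rf "$work"
```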

Mathias Homann wrote:
On 31.07.19 21:00, Per Jessen wrote:
Carlos E. R. wrote:
Ideas? For backups, what is wrong with "tar cJf"?
...there is not a single buzzword in it?
Of course, you're right, mea culpa. :-)

-- Per Jessen, Zürich (19.2°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.

On 2019-07-31 at 21:00 +0200, Per Jessen wrote:
Welcome back! :-D
Carlos E. R. wrote:
Ideas?
For backups, what is wrong with "tar cJf"?
Eight terabytes of it?

Aside from not being convenient for creating and later accessing a single file, it is not reliable: a single bit error and the whole thing is unrecoverable, because it cannot be decompressed.

RAR would be acceptable, but it does not fully support the Linux permission and attribute system.

Frankly, rsync is far easier to manage. Access is transparent, and the heavy job is passed on to another machine, where the disk is connected. I have already created the btrfs filesystem; now I have a problem with selecting the compression method. That is in another post.

-- Cheers Carlos E. R. (from openSUSE 15.0 (Legolas))

Carlos E. R. wrote:
On 2019-07-31 at 21:00 +0200, Per Jessen wrote:
Welcome back! :-D
Carlos E. R. wrote:
Ideas?
For backups, what is wrong with "tar cJf"?
Eight terabytes of it?
Sure, if time allows.
Aside from not being convenient for creating and later accessing a single file, it is not reliable: a single bit error and the whole thing is unrecoverable, because it cannot be decompressed.
RAR would be acceptable, but it does not fully support the Linux permission and attribute system.
Frankly, rsync is far easier to manage. Access is transparent, and the heavy job is passed on to another machine, where the disk is connected.
I agree, but it all depends on your requirements (which you didn't say much about). For a plain daily backup, tar has been doing a fine job for decades; we use it several times a day, for all kinds of stuff.

For larger volumes of data, e.g. your 8 TB, we don't have the time to do full backups; instead we keep two copies, some with drbd, some with regular rsync.

A compressed filesystem with read/write access - personally I don't really see the need. We use cromfs for archives, read-only, but I don't think cromfs would be efficient for terabytes or more.

-- Per Jessen, Zürich (17.4°C) http://www.hostsuisse.com/ - virtual servers, made in Switzerland.

On 2019-08-01 02:52 AM, Per Jessen wrote:
For backups, what is wrong with "tar cJf"? Eight terabytes of it? Sure, if time allows.
And you have enough floppies! ;-)

On 01/08/2019 08.52, Per Jessen wrote:
Carlos E. R. wrote:
On 2019-07-31 at 21:00 +0200, Per Jessen wrote:
Welcome back! :-D
Carlos E. R. wrote:
Ideas?
For backups, what is wrong with "tar cJf"?
Eight terabytes of it?
Sure, if time allows.
Aside from not being convenient for creating and later accessing a single file, it is not reliable: a single bit error and the whole thing is unrecoverable, because it cannot be decompressed.
RAR would be acceptable, but it does not fully support the Linux permission and attribute system.
Frankly, rsync is far easier to manage. Access is transparent, and the heavy job is passed on to another machine, where the disk is connected.
I agree, but it all depends on your requirements (which you didn't say much about).
A compressed filesystem - that's my requirement :-) I would prefer ext4, which has had a compressed flag since version 2 or so, but it has never been implemented.
For a plain daily backup, tar has been doing a fine job for decades, we use it several times a day, for all kinds of stuff.
For larger volumes of data, e.g. your 8 TB, we don't have the time to do full backups; instead we keep two copies, some with drbd, some with regular rsync.
Well, that's it. I lost a 3 TB disk recently, suddenly, with no warning from SMART. Even if the data is not crucial and I have partial backups, which I'm in the process of recovering, it is a loss.
A compressed filesystem with read/write access - personally I don't really see the need. We use cromfs for archives, read-only, but I don't think cromfs would be efficient for terabytes or more.
If you use tape, you have compression in hardware. I don't have tape. It is reasonable to compress (even at level 1, which would be my choice) files like email folders; not compressing them seems a waste to me.

The backup is proceeding now at 38.71 Mbytes/sec, over the network. One core is almost 100% busy on the receiving side.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Carlos E. R. wrote:
For a plain daily backup, tar has been doing a fine job for decades, we use it several times a day, for all kinds of stuff.
For larger volumes of data, e.g. your 8 TB, we don't have the time to do full backups; instead we keep two copies, some with drbd, some with regular rsync.
Well, that's it.
I lost a 3 TB disk recently, suddenly, with no warning from SMART.
Yes, those Western Digital Greens, they do just die. Did you run daily tests on it?
Even if the data is not crucial and I have partial backups, which I'm in the process of recovering, it is a loss.
Agree.
A compressed filesystem with read/write access - personally I don't really see the need. We use cromfs for archives, read-only, but I don't think cromfs would be efficient for terabytes or more.
If you use tape, you have compression in hardware. I don't have tape.
It is not important. xz and parallel compressors work faster and better than any typical LTO hardware.
It is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
It does not matter either - space is cheap. With huge data volumes it matters somewhat when you look at the time it takes, but you still have to get the data off the disk and onto another (or tape).
The backup is proceeding now at 38.71 Mbytes/sec, over the network. One core is almost 100% busy on the receiving side.
38 Mbytes/sec is a bit slow, isn't it?

-- Per Jessen, Zürich (26.3°C) http://www.cloudsuisse.com/ - your owncloud, hosted in Switzerland.

On 01/08/2019 18.30, Per Jessen wrote:
Carlos E. R. wrote:
For a plain daily backup, tar has been doing a fine job for decades, we use it several times a day, for all kinds of stuff.
For larger volumes of data, e.g. your 8 Tb, we don't have the time to do full backups, instead we keep two copies. Some with drbd, some with regular rsync.
Well, that's it.
I lost a 3TB disk recently and suddenly, no warning from smart.
Yes, those Western Digital Greens, they do just die.
Seagate Barracuda, like most of mine.
Did you run daily tests on it?
Certainly, the quick test. See the "Dead disk" mail from this past 06/06.

<3.6> 2019-06-04 09:38:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 65 to 67
<3.6> 2019-06-04 09:38:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 35 to 33
<3.6> 2019-06-04 09:38:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], old test of type S not run at Tue Jun 4 03:00:00 2019 CEST, starting now.
<3.6> 2019-06-04 09:38:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], starting scheduled Short Self-Test.
...
<3.6> 2019-06-04 09:47:19 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 67 to 66
<3.6> 2019-06-04 09:47:19 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 33 to 34
<3.6> 2019-06-04 09:47:19 Telcontar smartd 1484 - - Device: /dev/sda [SAT], previous self-test completed without error
...

and the next reference to sda is:

<0.5> 2019-06-05 11:17:34 Telcontar kernel - - - [46314.994158] sd 8:0:0:0: [sda] Starting disk
<0.3> 2019-06-05 11:24:06 Telcontar kernel - - - [46712.886094] ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
<0.3> 2019-06-05 11:24:06 Telcontar kernel - - - [46712.886098] ata8.00: failed command: SMART
...
<3.6> 2019-06-05 11:35:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 190 Airflow_Temperature_Cel changed from 65 to 69
<3.6> 2019-06-05 11:35:37 Telcontar smartd 1484 - - Device: /dev/sda [SAT], SMART Usage Attribute: 194 Temperature_Celsius changed from 35 to 31
<3.6> 2019-06-05 11:35:38 Telcontar smartd 1484 - - Device: /dev/sda [SAT], old test of type S not run at Wed Jun 5 03:00:00 2019 CEST, starting now.
<3.6> 2019-06-05 11:35:38 Telcontar smartd 1484 - - Device: /dev/sda [SAT], starting scheduled Short Self-Test.
...
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730336] ata8.00: exception Emask 0x0 SAct 0x600 SErr 0x0 action 0x6 frozen
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730340] ata8.00: failed command: READ FPDMA QUEUED
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730346] ata8.00: cmd 60/00:48:80:27:27/08:00:01:00:00/40 tag 9 ncq dma 1048576 in
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730346] res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730348] ata8.00: status: { DRDY }
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730350] ata8.00: failed command: READ FPDMA QUEUED
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730356] ata8.00: cmd 60/00:50:80:2f:27/08:00:01:00:00/40 tag 10 ncq dma 1048576 in
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730356] res 40/00:00:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
<0.3> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730357] ata8.00: status: { DRDY }
<0.6> 2019-06-05 11:36:13 Telcontar kernel - - - [47440.730360] ata8: hard resetting link
<0.6> 2019-06-05 11:36:19 Telcontar kernel - - - [47446.070550] ata8: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
<0.6> 2019-06-05 11:36:19 Telcontar kernel - - - [47446.088738] ata8.00: configured for UDMA/133
<0.4> 2019-06-05 11:36:19 Telcontar kernel - - - [47446.088744] ata8.00: device reported invalid CHS sector 0
<0.6> 2019-06-05 11:36:19 Telcontar kernel - - - [47446.088750] ata8: EH complete
<0.3> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718107] ata8.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
<0.3> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718111] ata8.00: failed command: SMART
<0.3> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718118] ata8.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 9 pio 512 in
<0.3> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718118] res 40/00:ff:00:00:00/00:00:00:00:00/40 Emask 0x4 (timeout)
<0.3> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718119] ata8.00: status: { DRDY }
<0.6> 2019-06-05 11:37:50 Telcontar kernel - - - [47537.718123] ata8: hard resetting link

And thousands of error lines more. When I managed to call the smartctl command:

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f 115   099   006    Pre-fail Always  -           87743464
  3 Spin_Up_Time            0x0003 095   093   000    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032 098   098   020    Old_age  Always  -           2401
  5 Reallocated_Sector_Ct   0x0033 089   089   010    Pre-fail Always  -           13904
  7 Seek_Error_Rate         0x000f 058   058   030    Pre-fail Always  -           124577919351
  9 Power_On_Hours          0x0032 078   078   000    Old_age  Always  -           19447
 10 Spin_Retry_Count        0x0013 100   100   097    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032 098   098   020    Old_age  Always  -           2324
183 Runtime_Bad_Block       0x0032 098   098   000    Old_age  Always  -           2
184 End-to-End_Error        0x0032 100   100   099    Old_age  Always  -           0
187 Reported_Uncorrect      0x0032 100   100   000    Old_age  Always  -           0
188 Command_Timeout         0x0032 071   071   000    Old_age  Always  -           28 30 30
189 High_Fly_Writes         0x003a 100   100   000    Old_age  Always  -           0
190 Airflow_Temperature_Cel 0x0022 066   056   045    Old_age  Always  -           34 (Min/Max 34/36)
191 G-Sense_Error_Rate      0x0032 100   100   000    Old_age  Always  -           0
192 Power-Off_Retract_Count 0x0032 100   100   000    Old_age  Always  -           93
193 Load_Cycle_Count        0x0032 007   007   000    Old_age  Always  -           186842
194 Temperature_Celsius     0x0022 034   044   000    Old_age  Always  -           34 (0 13 0 0 0)
197 Current_Pending_Sector  0x0012 001   001   000    Old_age  Always  -           21344
198 Offline_Uncorrectable   0x0010 001   001   000    Old_age  Offline -           21344
199 UDMA_CRC_Error_Count    0x003e 200   200   000    Old_age  Always  -           4
240 Head_Flying_Hours       0x0000 100   253   000    Old_age  Offline -           5309h+12m+33.571s
241 Total_LBAs_Written      0x0000 100   253   000    Old_age  Offline -           90692687964
242 Total_LBAs_Read         0x0000 100   253   000    Old_age  Offline -           20862873710
Even if the data is not crucial and I have partial backups, which I'm in the process of recovering, it is a loss.
Agree.
A compressed filesystem with read/write access - personally I don't really see the need. We use cromfs for archives, read-only, but I don't think cromfs would be efficient for terabytes or more.
If you use tape, you have compression in hardware. I don't have tape.
It is not important. xz and parallel compressors work faster and better than any typical LTO hardware.
It is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
It does not matter either - space is cheap. With huge data volumes it matters somewhat when you look at the time it takes, but you still have to get the data off the disk and onto another (or tape).
At 8 TB, disks are no longer cheap for home use. Up to a certain range of sizes the price per terabyte goes down; beyond it, it increases again.
The backup is proceeding now at 38.71 Mbytes/sec, over the network. One core is almost 100% busy on the receiving side.
38 Mbytes/sec is a bit slow, isn't it?
Well, the other side is a tiny computer in half a shoe box. That's why I wanted to change the compression level to "1". It is first compressed, then encrypted with LUKS.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Carlos E. R. wrote:
It is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
Does not matter either - space is cheap. With huge data volumes it matters somewhat when you look at the time it takes, but you still have to get the data off the disk and on to another (or tape).
At 8 TB, disks are no longer cheap for home use.
Space is cheaper than it ever was. If you use a lot, well, the total goes up, of course. The cheapest I see is SFr 224 for an 8 TB Seagate.
The backup is proceeding now at 38.71 Mbytes/sec, over the network. One core is almost 100% busy on the receiving side.
38 Mbytes/sec is a bit slow, isn't it?
Well, the other side is a tiny computer in half a shoe box. That's why I wanted to change the compression level to "1". It is first compressed, then encrypted with LUKS.
I guess it's a trade-off, but for a backup, I would not want to lose 50% of my bandwidth to compression and encryption. Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random access to the individual files. It will probably take quite some time with really large volumes.

-- Per Jessen, Zürich (19.1°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.

On Friday, 2019-08-02 at 09:47 +0200, Per Jessen wrote:
Carlos E. R. wrote:
It is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
Does not matter either - space is cheap. With huge data volumes it matters somewhat when you look at the time it takes, but you still have to get the data off the disk and on to another (or tape).
At 8 TB, disks are no longer cheap for home use.
Space is cheaper than it ever was. If you use a lot, well, the total goes up, of course. The cheapest I see is SFr 224 for an 8 TB Seagate.
WD My Book 10TB 3.5" USB 3.0 Black - 257,80€ --> 25.78€/TB
WD My Book Essential 8TB 3.5" USB 3.0 Black - 190,80€ --> 23.85€/TB, a sweet price per terabyte (then I found the My Book on Amazon for 181,82€)
My Book Essential 6TB 3.5" USB 3.0 Black - 159,34€ --> 26.55€/TB
My Book Essential 4TB 3.5" USB 3.0 Black - 142,19€ --> 35.54€/TB
Seagate Backup Plus Hub 10TB 3.5" USB 3.0 - 307,80€ --> 30.78€/TB
Seagate Backup Plus Hub 8TB 3.5" USB 3.0 - 216,80€ --> 27.10€/TB
Seagate Expansion 2.5" 4TB USB 3.0 - 119,99€ --> 30.00€/TB

Internal hard disks are even more expensive. Normally it is (was?) the reverse.

WD Blue 6TB 3.5" SATA 3 - 202,04€ --> 33.67€/TB
Seagate BarraCuda Pro 3.5" 6TB SATA3 - 292,21€ --> 48.70€/TB
Seagate BarraCuda 3.5" 4TB SATA3 - 108,59€ --> 27.14€/TB
The backup is proceeding now at 38.71 Mbytes/sec, over the network. One core is almost 100% busy on the receiving side.
38 Mbytes/sec is a bit slow, isn't it?
Well, the other side is a tiny computer in half a shoe box. That's why I wanted to change the compression level to "1". It is first compressed, then encrypted with LUKS.
I guess it's a trade-off, but for a backup, I would not want to lose 50% of my bandwidth to compression and encryption.
Encryption is a must, compression is extra. If it takes longer, so be it... It is going faster than I initially predicted, because this machine doesn't have USB3.
Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random-access to the individual files. Probably will take quite some time with really large volumes.
But I do want read/write access; I want to use rsync. And I do it live, while the machine is working... For frozen snapshots I have another disk (bootable) to image the system partitions only.

I'm wondering about the compression ratio btrfs achieves, though. Last measure: nothing, or beyond the decimals.

du --si -sc /mnt/BookTelcontar
1.8T /mnt/BookTelcontar

df --si /mnt/BookTelcontar
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/cr_my_book_tlcntr  8.1T  1.8T  6.3T  22% /mnt/BookTelcontar

I must tailor the command to print megabyte units.

-- Cheers, Carlos E. R. (from openSUSE 15.0 x86_64 at Telcontar)
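The "tailoring" is just a matter of forcing du and df into the same fixed unit, so their numbers compare directly; GNU coreutils accepts --block-size=1M on both. A self-contained demonstration on a throwaway directory (the real commands would point at /mnt/BookTelcontar instead):

```shell
# Force du into fixed 1M units; df accepts the same --block-size=1M flag.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/blob" bs=1M count=3 status=none  # a 3 MiB file
used=$(du -s --block-size=1M "$dir" | cut -f1)            # usage in MiB
echo "used=${used}M"
rm -rf "$dir"
```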

Le 02/08/2019 à 10:31, Carlos E. R. a écrit :
WD My Book Essential 8TB 3.5" USB 3.0 Black 190,80€ (23,85€/TB) --> sweet price per terabyte. (then I found the My Book on Amazon for 181,82€)
160€ https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
My Book Essential 4TB 3.5" USB 3.0 Black 142,19€ --> 35.54€/TB
90€: https://www.nierle.com/fr/article/12773/Intenso_Memory_Center_Disque_dur_3.5...

jdd -- http://dodin.org

On 02/08/2019 10.44, jdd@dodin.org wrote:
Le 02/08/2019 à 10:31, Carlos E. R. a écrit :
WD My Book Essential 8TB 3.5" USB 3.0 Black 190,80€ (23,85€/TB) --> sweet price per terabyte. (then I found the My Book on Amazon for 181,82€)
160€
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
Maybe it's the multimedia copy tax we have here.
My Book Essential 4TB 3.5" USB 3.0 Black 142,19€ --> 35.54€/TB
90€:
https://www.nierle.com/fr/article/12773/Intenso_Memory_Center_Disque_dur_3.5...
That's a lot. -- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Le 02/08/2019 à 10:49, Carlos E. R. a écrit :
On 02/08/2019 10.44, jdd@dodin.org wrote:
Le 02/08/2019 à 10:31, Carlos E. R. a écrit :
WD My Book Essential 8TB 3.5" USB 3.0 Black 190,80€ (23,85€/TB) --> sweet price per terabyte. (then I found the My Book on Amazon for 181,82€)
160€
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
Maybe the tax on multimedia copy we have.
In fact it doesn't apply when the goods are sent from Germany.

This dealer made its money selling CDs/DVDs and now tries to stay alive selling disks (and the rest). Quite cheap, very serious. Add some money (around 8€) for shipping, but it is a flat rate per 30 kg, so it is better to order several things at once :-). I have been buying there for around 15 years :-)

As said, I have one 4 TB and two 5 TB disks for the same 3.5 TB of archives, in various locations: one near me and connected all the time, one not far away but only connected for rsync, and another far away (when I was working it was at work; now it is at the other side of the house). None has ever failed, but they have very little use time.

jdd -- http://dodin.org

jdd@dodin.org wrote:
Le 02/08/2019 à 10:49, Carlos E. R. a écrit :
On 02/08/2019 10.44, jdd@dodin.org wrote:
Le 02/08/2019 à 10:31, Carlos E. R. a écrit :
WD My Book Essential 8TB 3.5" USB 3.0 Black 190,80€ (23,85€/TB) --> sweet price per terabyte. (then I found the My Book on Amazon for 181,82€)
160€
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
Maybe the tax on multimedia copy we have.
In fact it doesn't apply when the goods are sent from Germany.
This dealer made its money selling CDs/DVDs and now tries to stay alive selling disks (and the rest). Quite cheap, very serious. Add some money (around 8€) for shipping, but it is a flat rate per 30 kg, so it is better to order several things at once :-). I have been buying there for around 15 years :-)
I was curious; I've never heard of that shop. At first they seemed like quite competitive prices, but for 2 TB drives (which we buy regularly), there is nothing saved: CHF 68 apiece + shipping = 83; here the same drive is 82, with no shipping cost.

-- Per Jessen, Zürich (18.9°C) http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.

Le 02/08/2019 à 11:38, Per Jessen a écrit :
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
I was curious; I've never heard of that shop. At first they seemed like quite competitive prices, but for 2 TB drives (which we buy regularly), there is nothing saved: CHF 68 apiece + shipping = 83; here the same drive is 82, with no shipping cost.
They do weekend special offers, very interesting; you can give them your email address to test. They are not spammers...

jdd -- http://dodin.org

Carlos E. R. wrote:
Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random-access to the individual files. Probably will take quite some time with really large volumes.
But I do want read/write access, I want to use rsync. And I do it live, while the machine is working... For frozen snapshots I have another disk (bootable) to image the system partitions only.
I wonder if you are losing sight of the objective (backup) and focusing on stuff that is secondary. For instance, I don't understand why you would need/want write access to a backup copy.

-- Per Jessen, Zürich (18.9°C) http://www.hostsuisse.com/ - dedicated server rental in Switzerland.

Le 02/08/2019 à 11:20, Per Jessen a écrit :
on stuff that is secondary. For instance, I don't understand why you would need/want write access to a backup copy.
There is often confusion between backup and archives.

In my archives, mostly photo/video, I keep the whole source (video from the camcorder, photos from SD or CF cards) for some years. I can't keep them forever because they are extremely large, and three years after the take I'm pretty sure I won't redo the mixing/editing. So I remove all sources after a while (of course I keep all the edited stuff). This needs write access.

If you make a backup for life (mails, text files...), of course write-once is better; I used DVD or BD. A hard drive is never really write-once.

jdd -- http://dodin.org

On 02/08/2019 11.20, Per Jessen wrote:
Carlos E. R. wrote:
Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random-access to the individual files. Probably will take quite some time with really large volumes.
But I do want read/write access, I want to use rsync. And I do it live, while the machine is working... For frozen snapshots I have another disk (bootable) to image the system partitions only.
I wonder if you are losing sight of the objective (backup) and focusing on stuff that is secondary. For instance, I don't understand why you would need/want write access to a backup copy.
Because I don't see how to use rsync if there is no write access... When I repeat the backup, I simply run rsync again, into another directory, with hardlinks to the old stuff. It is very efficient.

The only advantage cromfs has is more compression; that is secondary to me, compared to using rsync. Now that I'm using btrfs, I could use snapshots instead of rsync hardlinks. I could try and see.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
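The mechanism behind this scheme is the hardlink: rsync's --link-dest=PREVIOUS creates the new tree with hardlinks into the previous run for unchanged files, so each extra "full" copy costs almost nothing. A coreutils-only sketch of the same effect (temporary paths, illustrative only):

```shell
# Hardlink-based "full" copies: the second run shares disk blocks with
# the first via hardlinks (what rsync --link-dest does incrementally).
src=$(mktemp -d); dest=$(mktemp -d)
echo "mail folder contents" > "$src/inbox"
cp -a  "$src" "$dest/run1"          # first backup: a real copy
cp -al "$dest/run1" "$dest/run2"    # second backup: hardlinks, ~zero space
links=$(stat -c %h "$dest/run2/inbox")
echo "link count of unchanged file: $links"
rm -rf "$src" "$dest"
```

An unchanged file ends up with a link count of 2, one name per backup run, while occupying the space of a single copy.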

Carlos E. R. wrote:
On 02/08/2019 11.20, Per Jessen wrote:
Carlos E. R. wrote:
Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random-access to the individual files. Probably will take quite some time with really large volumes.
But I do want read/write access, I want to use rsync. And I do it live, while the machine is working... For frozen snapshots I have another disk (bootable) to image the system partitions only.
I wonder if you are losing sight of the objective (backup) and focusing on stuff that is secondary. For instance, I don't understand why you would need/want write access to a backup copy.
Because I don't see how to use rsync if there is no write access... When I repeat the backup, I simply run rsync again, into another directory, with hardlinks to the old stuff. It is very efficient.
Aha, you are using the differential rsync method. Yes, that works very well. I was thinking more in terms of a backup copy, one file. When you are doing differential copies anyway, have you looked at how much space each copy really takes and how much you would actually gain by having it compressed? It might not be very much.
The only advantage cromfs has is more compression. That is secondary to me, as compared to using rsync.
I thought you said you were short on space and therefore needed compression; see $SUBJ.

-- Per Jessen, Zürich (18.3°C) http://www.hostsuisse.com/ - virtual servers, made in Switzerland.

On 02/08/2019 12.34, Per Jessen wrote:
Carlos E. R. wrote:
On 02/08/2019 11.20, Per Jessen wrote:
Carlos E. R. wrote:
Anyway, if you're worried about space, maybe use cromfs? You can't rewrite the volumes, but you have random-access to the individual files. Probably will take quite some time with really large volumes.
But I do want read/write access, I want to use rsync. And I do it live, while the machine is working... For frozen snapshots I have another disk (bootable) to image the system partitions only.
I wonder if you are losing sight of the objective (backup) and focusing on stuff that is secondary. For instance, I don't understand why you would need/want write access to a backup copy.
Because I don't see how to use rsync if there is no write access... When I repeat the backup, I simply run rsync again, into another directory, with hardlinks to the old stuff. It is very efficient.
Aha, you are using the differential rsync method. Yes, that works very well. I was thinking more in terms of a backup copy, one file. When you are doing differential copies anyway, have you looked at how much space each copy really takes and how much you would actually gain by having it compressed? It might not be very much.
I'm reporting the compression ratio in another part of the thread; it is not much. 10%?
The only advantage cromfs has is more compression. That is secondary to me, as compared to using rsync.
I thought you said you were short on space and therefore needed compression, see $SUBJ.
Yes, of course, but the idea is not to waste space. Email compresses by half, and there is a lot of it. There are virtual machines. It hurts me to see that space wasted. It is 25 euros per terabyte.

It is all a compromise. I do not want to increase compression at the cost of more work for me, or more days to do the backup. I would select compress=1 if I could, but the current Leap kernel does not support it.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Carlos E. R. wrote:
On 02/08/2019 12.34, Per Jessen wrote:
Aha, you are using the differential rsync method. Yes, that works very well. I was thinking more in terms of a backup copy, one file. When you are doing differential copies anyway, have you looked at how much space each copy really takes and how much you would actually gain by having it compressed? It might not be very much.
I'm reporting the compression ratio in another part of the thread, and it is not much. 10%?
That's probably a very conservative estimate, but it depends on your data. The question is more - how much data changes for every backup/rsync run?
It is all a compromise. I do not want to increase compression at the cost of more work for me, or more days to do the backup. I would select compress=1 if I could, but the current Leap kernel does not support it.
I would be tempted to

a) do a diff rsync copy, no compression, no encryption.
b) when done, compress and encrypt on the backup machine.

That way the backup is done as quickly as possible (at wire speed), and compression and encryption can be done without impacting the backup time (as long as it is done before the next backup).

-- Per Jessen, Zürich (20.7°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
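[Per's two-step idea could look like this, as a sketch with throwaway directories standing in for the real machines, and gzip standing in for whichever compressor is actually chosen:]

```shell
# a) copy at wire speed, no compression; b) compress afterwards, off
# the backup's critical path. Throwaway directories stand in for the
# real source and backup machine.
src=$(mktemp -d); staging=$(mktemp -d); archive=$(mktemp -d)
printf 'compressible text %.0s' $(seq 1 1000) > "$src/mail.mbox"
# a) the rsync copy, fast, no compression:
rsync -a "$src/" "$staging/"
# b) later, on the backup machine, compress into one file (encryption
#    could be piped in here the same way, e.g. through gpg):
tar -C "$staging" -cf - . | gzip -1 > "$archive/backup.tar.gz"
ls -l "$archive/backup.tar.gz"
```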

On 02/08/2019 13.25, Per Jessen wrote:
Carlos E. R. wrote:
On 02/08/2019 12.34, Per Jessen wrote:
Aha, you are using the differential rsync method. Yes, that works very well. I was thinking more in terms of a backup copy, one file. When you are doing differential copies anyway, have you looked at how much space each copy really takes and how much you would actually gain by having it compressed? It might not be very much.
I'm reporting the compression ratio in another part of the thread, and it is not much. 10%?
That's probably a very conservative estimate, but it depends on your data. The question is more - how much data changes for every backup/rsync run?
I don't know. Depends on the time interval. Not much, usually.
It is all a compromise. I do not want to increase compression at the cost of more work for me, or more days to do the backup. I would select compress=1 if I could, but the current Leap kernel does not support it.
I would be tempted to
a) do a diff rsync copy, no compression, no encryption.
I don't have the hardware for doing so. The entire backup disk is encrypted and compressed, and resides on another machine.
b) when done, compress and encrypt on the backup machine.
That way the backup is done as quickly as possible (at wire speed), compression and encryption can be done without impacting the backup time (as long as it is done before the next backup).
I see your idea now. I don't really mind the "photo" (snapshot) not being instantaneous. Another method is doing a full run, which may take days, then another one which will run faster because it only copies the modifications from that period. I have another complication: the other machine is overheating and throttling down the CPU. It has no internal fan. I have increased the external fan speed, and will add "cpulimit" to the mix.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Le 01/08/2019 à 17:18, Carlos E. R. a écrit :
is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
But that is saving nearly free space (mail is usually only some gigabytes), and to recover the files you need to uncompress them (tar) or find a compatible driver (file compression).

I personally use three 4-5 TB disks, but you can now get 8 TB disks at a reasonable price (160€):
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...

There are even SD cards of 1 TB for 50€ (incredible; if it were not from Amazon, I would suspect a fake one):
https://www.amazon.fr/Biaosner-m%C3%A9moire-Adaptateur-t%C3%A9l%C3%A9phone-T...

jdd
--
http://dodin.org

On 01/08/2019 20.09, jdd@dodin.org wrote:
Le 01/08/2019 à 17:18, Carlos E. R. a écrit :
is reasonable to compress (even at level 1, which would be my choice) files like email folders. Not compressing them seems a waste to me.
But that is saving nearly free space (mail is usually only some gigabytes), and to recover the files you need to uncompress them (tar) or find a compatible driver (file compression).
Well, I'm using the latter now: an encrypted and compressed btrfs filesystem on a new 8 TB external hard disk (My Book). Sending everything in stages. Access to any file will be immediate, after connecting and mounting the disk. But btrfs does not report the effective compression ratio. The tool "compsize" can say something, but it is not in the distro.
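[For reference, compression on btrfs is just a mount option; a possible /etc/fstab line for such a disk (device name taken from the df output elsewhere in the thread; the options are a sketch, not necessarily Carlos's actual entry):]

```
/dev/mapper/cr_my_book_tlcntr  /mnt/BookTelcontar  btrfs  compress=zlib,noatime  0  0
```

[Note that `compress` only affects data written while the option is active; files copied earlier stay uncompressed unless rewritten, e.g. with `btrfs filesystem defragment -czlib`.]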
I personally use three 4-5 TB disks, but you can now get 8 TB disks at a reasonable price (160€)
https://www.nierle.com/fr/article/682424/Seagate_Backup_Plus_Disque_dur_3.5_...
There are even SD cards of 1 TB for 50€ (incredible; if it were not from Amazon, I would suspect a fake one)
https://www.amazon.fr/Biaosner-m%C3%A9moire-Adaptateur-t%C3%A9l%C3%A9phone-T...
Suspicious. Who said that, inspector Gadget? -- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Le 01/08/2019 à 20:46, Carlos E. R. a écrit :
On 01/08/2019 20.09, jdd@dodin.org wrote:
There are even SD cards of 1 TB for 50€ (incredible; if it were not from Amazon, I would suspect a fake one)
https://www.amazon.fr/Biaosner-m%C3%A9moire-Adaptateur-t%C3%A9l%C3%A9phone-T...
Suspicious. Who said that, inspector Gadget?
I often buy large SD cards to try them, in places like AliExpress or Amazon, where I can be refunded :-)

Test with h2testw.exe (Windows) or F3 (the Linux equivalent). Most of them are fake: they use pagination, often with 8 MB pages.

[micro] SD cards are already pretty fragile (I have a lot of them for photo/video, 32 GB or 64 GB now), and pretty slow (compared to an SSD), so 1 TB is much more than what I want to have.

jdd
--
http://dodin.org

On 01/08/2019 21.12, jdd@dodin.org wrote:
Le 01/08/2019 à 20:46, Carlos E. R. a écrit :
On 01/08/2019 20.09, jdd@dodin.org wrote:
There are even SD cards of 1 TB for 50€ (incredible; if it were not from Amazon, I would suspect a fake one)
https://www.amazon.fr/Biaosner-m%C3%A9moire-Adaptateur-t%C3%A9l%C3%A9phone-T...
Suspicious. Who said that, inspector Gadget?
I often buy large SD cards to try them, in places like AliExpress or Amazon, where I can be refunded :-)
Test with h2testw.exe (Windows) or F3 (the Linux equivalent). Most of them are fake: they use pagination, often with 8 MB pages.
[micro] SD cards are already pretty fragile (I have a lot of them for photo/video, 32 GB or 64 GB now), and pretty slow (compared to an SSD), so 1 TB is much more than what I want to have.
I don't want to risk it. I have some cards, good quality and brand names, for photo stuff. Cameras, tablet, phone, negative scanner... None has failed me. The biggest is 64M. -- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

On Thursday, 2019-08-01 at 17:18 +0200, Carlos E. R. wrote:
On 01/08/2019 08.52, Per Jessen wrote:
...
I agree, but it all depends on your requirements (which you didn't say much about).
A compressed filesystem - that's my requirement :-)
I would prefer ext4, which has had the compressed flag since version 2 or so, but it has never been implemented.
...
The backup is proceeding now at 38.71M bytes/sec, over the network. One core almost 100% busy at receiving side.
Now, with rsync stopped for adjustments, I took a minute for a check.

Isengard:~ # time ( df --si /mnt/BookTelcontar ; echo ---- ; du --si -sc /mnt/BookTelcontar ; df --si /mnt/BookTelcontar )
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/cr_my_book_tlcntr  8.1T  900G  7.1T  12% /mnt/BookTelcontar
----
960G    /mnt/BookTelcontar
960G    total
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/cr_my_book_tlcntr  8.1T  900G  7.1T  12% /mnt/BookTelcontar

real    6m45.627s
user    0m13.828s
sys     1m3.189s
Isengard:~ #

Well, almost 7 minutes :-)

It is using 900 GB (decimal) of disk space, but the files use 960 GB - that might indicate the current compression efficiency.

-- Cheers, Carlos E. R. (from openSUSE 15.0 x86_64 at Telcontar)

On Thursday, 2019-08-01 at 17:18 +0200, Carlos E. R. wrote:
On 01/08/2019 08.52, Per Jessen wrote:
The backup is proceeding now at 38.71M bytes/sec, over the network. One core almost 100% busy at receiving side.
Now, with rsync stopped for adjustments, I took a minute for a check.
Isengard:~ # time ( df --si /mnt/BookTelcontar ; echo ---- ; du --si -sc /mnt/BookTelcontar ; df --si /mnt/BookTelcontar )
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/cr_my_book_tlcntr  8.1T  900G  7.1T  12% /mnt/BookTelcontar
----
960G    /mnt/BookTelcontar
960G    total
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/cr_my_book_tlcntr  8.1T  900G  7.1T  12% /mnt/BookTelcontar

real    6m45.627s
user    0m13.828s
sys     1m3.189s
Isengard:~ #
Well, almost 7 minutes :-)
It is using 900 GB (decimal) of disk space, but the files use 960GB - that might indicate the current compression efficiency.
Now, using:

du: 1908132MB
df: 1842512MB

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
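[For what it's worth, the saving implied by those du/df figures can be computed directly:]

```shell
# Compression saving estimated from the apparent size (du) and the
# on-disk size (df), using the MB figures quoted above.
du_mb=1908132   # du: apparent (uncompressed) size
df_mb=1842512   # df: space actually used on disk
saved=$((du_mb - df_mb))
pct=$((100 * saved / du_mb))
echo "saved ${saved} MB (~${pct}%)"   # prints: saved 65620 MB (~3%)
```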

On 02/08/2019 11.51, Carlos E. R. wrote: ... (compression ratio)
It is using 900 GB (decimal) of disk space, but the files use 960GB - that might indicate the current compression efficiency.
Now, using:
du: 1908132MB df: 1842512MB
Now, using:

du: 3023524MB
df: 2950586MB

(8 minutes to calculate)

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

On 03/08/2019 02.26, Carlos E. R. wrote:
On 02/08/2019 11.51, Carlos E. R. wrote:
... (compression ratio)
It is using 900 GB (decimal) of disk space, but the files use 960GB - that might indicate the current compression efficiency.
Now, using:
du: 1908132MB df: 1842512MB
Now, using:
du: 3023524MB df: 2950586MB
(8 minutes to calculate)
du: 3767628MB
df: 3693382MB

2%? 74.246 GB saved with compression? I don't know if the estimate is correct. If it is, the compression is pitiful. I would need a better method, and per directory.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

On 08/03/2019 05:43 AM, Carlos E. R. wrote:
du: 3767628MB df: 3693382MB
2%? 74.246GB saved with compression? I don't know if the estimate is correct. If it is, the compression is pitiful. I would need a better method, and per directory.
Chuckling... 2%? Considering all the additional processing required to attempt the compression in the first place, you would expect to see much more than that (unless your data is all binary to begin with -- then you wouldn't expect to see much at all -- making the compression somewhat superfluous).

-- David C. Rankin, J.D., P.E.

On 04/08/2019 22.13, David C. Rankin wrote:
On 08/03/2019 05:43 AM, Carlos E. R. wrote:
du: 3767628MB df: 3693382MB
2%? 74.246GB saved with compression? I don't know if the estimate is correct. If it is, the compression is pitiful. I would need a better method, and per directory.
Chuckling... 2%? Considering all the additional processing required to attempt the compression to begin with, you would expect to see much more than that.
Certainly.
(unless your data is all binary to begin with -- then you wouldn't expect to see much at all -- making the compression somewhat superfluous)
There is something going wrong. Maybe:

* There is in fact no compression support.
* The method I use to calculate the ratio does not work.

I need a tool that calculates the actual compression ratio per directory or per file. There is one, but the distro doesn't build it. I have to investigate that.

Even binaries should compress by about 20% (just create a tgz or zip archive and check the ratio to find out). Only movies, photos, audio, libreoffice files... would not compress at all. Mail should compress by half.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
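[The tgz check suggested above can be scripted per directory; a sketch, using a throwaway directory of compressible sample data:]

```shell
# Estimate how well a directory would compress: tar+gzip it and
# compare the archive size with the original size.
dir=$(mktemp -d)
seq 1 50000 > "$dir/data.txt"        # stand-in compressible data
tar -C "$dir" -czf /tmp/sample.tgz .
orig=$(du -sb "$dir" | cut -f1)
comp=$(stat -c %s /tmp/sample.tgz)
echo "compressed to $((100 * comp / orig))% of original"
```

[Point `dir` at a real directory (and drop the `seq` line) to measure actual data.]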

On 03/08/2019 12.43, Carlos E. R. wrote:
On 03/08/2019 02.26, Carlos E. R. wrote:
On 02/08/2019 11.51, Carlos E. R. wrote:
... (compression ratio)
It is using 900 GB (decimal) of disk space, but the files use 960GB - that might indicate the current compression efficiency.
Now, using:
du: 1908132MB df: 1842512MB
Now, using:
du: 3023524MB df: 2950586MB
(8 minutes to calculate)
du: 3767628MB df: 3693382MB
2%? 74.246GB saved with compression? I don't know if the estimate is correct. If it is, the compression is pitiful. I would need a better method, and per directory.
Isengard:~ # time compsize --bytes /mnt/BookTelcontar/001/
Processed 3415745 files, 7871897 regular extents (8084571 refs), 1896373 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       97%    3687883543245  3783680719296  3879938206144
none       100%    3601384486854  3601384486854  3695579432902
zlib        47%      86499056391   182296232442   184358773242

real    33m58.873s
user    0m47.452s
sys     3m22.230s
Isengard:~ #

Detailed, per directory:

Isengard:~ # /mnt/BookTelcontar/cosas/ratio
----> aeat
Processed 54 files, 77 regular extents (77 refs), 30 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       94%         21245283       22425068       22425068
none       100%         19906639       19906639       19906639
zlib        53%          1338644        2518429        2518429
----> core
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       16%           593920        3534848        3534848
zlib        16%           593920        3534848        3534848
----> etc_13.1
Processed 4467 files, 1833 regular extents (1835 refs), 3340 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       37%         48467026      128233956      128250340
none       100%          3508385        3508385        3508385
zlib        36%         44958641      124725571      124741955
----> media
All empty or still-delalloced files.
----> opt
Processed 77435 files, 42764 regular extents (42812 refs), 42057 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       72%       1665960377     2307782630     2308290534
none       100%       1184014830     1184014830     1184408046
zlib        42%        481945547     1123767800     1123882488
----> sbin
Processed 106 files, 168 regular extents (173 refs), 15 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       46%          5597859       12109899       12585035
none       100%            24594          24594          24594
zlib        46%          5573265       12085305       12560441
----> subdomain
No files.
----> usr
Processed 925893 files, 503539 regular extents (515329 refs), 505449 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       54%      16552078656    30117955302    30971934438
none       100%       7981200568     7981200568     8023954616
zlib        38%       8570878088    22136754734    22947979822
----> .config
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       58%              162            278            278
zlib        58%              162            278            278
----> CopiaSeguridadParcial
Processed 4460 files, 2050 regular extents (2052 refs), 3291 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       60%        124894373      207257877      207274261
none       100%         76757028       76757028       76757028
zlib        36%         48137345      130500849      130517233
----> bin
Processed 14 files, 20 regular extents (21 refs), 3 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       53%           967741        1804207        1832879
zlib        53%           967741        1804207        1832879
----> lib
Processed 10150 files, 16470 regular extents (16792 refs), 471 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       42%        411000398      969897047      986621015
none       100%         61928388       61928388       62161860
zlib        38%        349072010      907968659      924459155
----> media.old
All empty or still-delalloced files.
----> other
All empty or still-delalloced files.
----> selinux
No files.
----> tftpboot
Processed 4 files, 29 regular extents (29 refs), 1 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       99%         13107264       13127744       13127744
none       100%         12980288       12980288       12980288
zlib        86%           126976         147456         147456
----> var
Processed 821887 files, 68695 regular extents (103928 refs), 746674 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       72%       7477227015    10263838126    10574843310
none       100%       5654517598     5654517598     5688076126
zlib        39%       1822709417     4609320528     4886767184
----> .razor
Processed 9 files, 1 regular extents (1 refs), 8 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       43%             5770          13325          13325
none       100%               30             30             30
zlib        43%             5740          13295          13295
----> DEADJOE
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       40%             8192          20480          20480
zlib        40%             8192          20480          20480
----> boot
Processed 423 files, 436 regular extents (436 refs), 188 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       87%         69724124       79978928       79978928
none       100%         62669027       62669027       62669027
zlib        40%          7055097       17309901       17309901
----> etc
Processed 4859 files, 1388 regular extents (1391 refs), 3773 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       24%         17967622       74244613       74269189
none       100%          1372889        1372889        1372889
zlib        22%         16594733       72871724       72896300
----> lib64
Processed 152 files, 223 regular extents (227 refs), 1 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       49%          7016513       14237761       14467137
none       100%               65             65             65
zlib        49%          7016448       14237696       14467072
----> new
Processed 7 files, 1 regular extents (1 refs), 6 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       39%             7127          17991          17991
none       100%               55             55             55
zlib        39%             7072          17936          17936
----> root
Processed 16903 files, 24431 regular extents (24435 refs), 11238 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       83%       8828420120    10520882950    10520903430
none       100%       8419410834     8419410834     8419402642
zlib        19%        409009286     2101472116     2101500788
----> srv
Processed 3665 files, 3438 regular extents (3438 refs), 1449 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       73%        363626900      493761326      493761326
none       100%        316271144      316271144      316271144
zlib        26%         47355756      177490182      177490182
----> tmp
Processed 1078 files, 1687 regular extents (1687 refs), 270 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       94%        491887457      519007313      519007313
none       100%        474316928      474316928      474316928
zlib        39%         17570529       44690385       44690385
----> windows
Processed 11 files, 2 regular extents (2 refs), 0 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       68%            98304         143360         143360
none       100%            53248          53248          53248
zlib        50%            45056          90112          90112
----> home
Processed 377720 files, 415772 regular extents (415843 refs), 100865 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       75%      30407940033    40325995960    40329129400
none       100%      23839382512    23839382512    23839624176
zlib        39%       6568557521    16486613448    16489505224
----> home1
Processed 7342 files, 159138 regular extents (159138 refs), 2108 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       87%      68264001896    77661383993    77661289785
none       100%      63030540937    63030540937    63030446729
zlib        35%       5233460959    14630843056    14630843056
----> home_aux
Processed 277083 files, 734758 regular extents (737420 refs), 96155 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       96%     573456935373   596718881181   604555798941
none       100%     547157815641   547157815641   554812318041
zlib        53%      26299119732    49561065540    49743480900
----> data
Processed 881996 files, 5894942 regular extents (6057469 refs), 378980 inline.
Type       Perc     Disk Usage    Uncompressed   Referenced
TOTAL       98%    2979654751452  3013224129885  3100458632541
none       100%    2943087815226  2943087815226  3029551180346
zlib        52%      36566936226    70136314659    70907452195

real    12m13.545s
user    0m43.632s
sys     2m57.379s
Isengard:~ #

This seems to indicate that the system (including code and libraries) does indeed benefit from compression, except for the directories that contain photos or videos, which take a lot of space and affect the total stats.

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)

Carlos E. R. wrote:
What filesystems do we have with transparent compression? Read/write, of course.
I intend to create a compressed backup external disk (Leap 15.0)
I know of btrfs and zfs.

How many backups do you want to store on the backup disk? One or multiple?
If you want to store multiple backups, I would recommend rsnapshot. Of course rsnapshot is not a compression solution, but because it uses hard links for unchanged files between backups, it uses less disk space than other solutions.

Greetings,
Björn
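[For reference, the rsnapshot scheme is driven by /etc/rsnapshot.conf; a minimal sketch (paths hypothetical; note that the fields in this file must be separated by tabs, not spaces):]

```
snapshot_root	/mnt/BookTelcontar/snapshots/
retain	daily	7
retain	weekly	4
backup	/home/	localhost/
```

[Running `rsnapshot daily` then keeps seven rotating daily snapshots; unchanged files are hardlinked between them, so each extra snapshot costs only the changed data.]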

On 06/08/2019 21.48, Bjoern Voigt wrote:
Carlos E. R. wrote:
What filesystems do we have with transparent compression? Read/write, of course.
I intend to create a compressed backup external disk (Leap 15.0)
I know of btrfs and zfs.

How many backups do you want to store on the backup disk? One or multiple?
If you want to store multiple backups, I would recommend Rsnapshot. Of course Rsnapshot is no compression solution, but because it uses hard links for unchanged files between backups it uses less disk space compared to other solutions.
You have missed some of the developments here :-) I'm using rsync on a compressed and encrypted btrfs partition (with hardlinks to the previous backup, too). With help from several people here, I got it working :-)

-- Cheers / Saludos, Carlos E. R. (from 15.0 x86_64 at Telcontar)
participants (7)
- Bjoern Voigt
- Carlos E. R.
- David C. Rankin
- James Knott
- jdd@dodin.org
- Mathias Homann
- Per Jessen