On 10/04/2016 02:34 PM, Carlos E. R. wrote:
> On 2016-10-04 19:03, John Andersen wrote:
>> On 10/04/2016 07:58 AM, Carlos E. R. wrote:
>>> Entire distros use tar.gz as package managers.
> With checksums. If there is a problem you download it again, so it's not an issue.
Ah, no, Carlos, you've confused the issue. You're doing too much of the Marshall McLuhan "the medium *IS* the message" thing.

A checksum on the download of an ISO or RPM or whatever, containing a TAR or CPIO or whatever package that has been compressed, serves other purposes as well. Not just "was it corrupted in transmission" but "is the copy you got the one the packagers intended, or one that was put together by hackers?" (Rough sketches of what I mean are in the P.S. below.)

A package can have internal checksums for its segments (and so can a file compression method, for that matter) ... no, wait, see https://en.wikipedia.org/wiki/Gzip -- the gzip format itself already carries a CRC-32 of the uncompressed data. What is it you are gzipping? You may also not be aware of it, but CRC-32 can do some error correction as well as detection.

Of course, your policy for this depends on many things. In IPv4 there is a checksum on the header, but the stack just discards erroneous packets and leaves retransmission to the higher layers. That policy is based on the relative cost of processing versus the cost of retransmission and the reliability of the network, and in IPv6 the header checksum has been dropped altogether.

On hard disks the CRC information can be and is used to verify the correctness of the data in the sector just read, and it can be and is used for error correction as well. I think this has been the case since at least 1982, when I wrote a low-level disk driver for the RL02 on a PDP-11-based V6 UNIX, for a carrier-grade telco application, that did exactly this: a repeated error caused the corrected data to be rewritten elsewhere, with the low-level disk mapping in the driver taking care of the redirection. This is now normal practice with modern disk drives and is handled by the on-board electronics, so the operating system sees an unblemished linear array of sectors no matter how they are organized at the physical level.

Again, this is a risk-management issue: the computational cost versus the cost of ... well, what else could you do? This isn't like the network, where you can ask for retransmission. We're going to face the same thing when we have the Interplanetary Internet; the (time) cost of asking for a packet to be resent will be excessive.

Perhaps TAR'ing up the whole system or file system and expecting there to be only one or two errors in something that large is what worries you? Well, perhaps you shouldn't take that big a bite of the cake.

You might read this:

https://www.g-loaded.eu/2007/12/01/choosing-a-format-for-data-backups-tar-vs...

which might also lead you to conclude that some other means of making backups or of making archives is needed. And that I can't argue with.

--
A: Yes.
> Q: Are you sure?
>> A: Because it reverses the logical flow of conversation.
>>> Q: Why is top posting frowned upon?
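
P.S. To make the checksum-on-download point concrete, here is a minimal sketch in Python of the kind of verification I mean. The file name and the expected digest are placeholders, not anything a distro actually ships; in practice you take the digest from the (signed) checksum file published next to the image.

    import hashlib

    def sha256_of(path, chunk=1 << 20):
        """Stream the file so even DVD-sized ISOs need not fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    expected = "..."  # placeholder: copy from the published checksum file
    if sha256_of("downloaded.iso") != expected:
        raise SystemExit("checksum mismatch: corrupted or tampered download")

A matching digest tells you the bits are the ones that were published; whether you trust the publisher is the separate question that the signature on the checksum file is for.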
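
P.P.S. On CRC-32 doing "some error correction as well as detection": for a single flipped bit you can simply try every bit and keep the flip that makes the CRC match again. This is only a toy demonstration of the principle (real on-disk ECC uses much stronger codes), but it runs:

    import zlib

    def fix_single_bit(data, expected_crc):
        """Return a corrected copy if flipping exactly one bit restores the CRC-32."""
        if zlib.crc32(data) == expected_crc:
            return bytearray(data)                 # nothing to fix
        for i in range(len(data) * 8):
            candidate = bytearray(data)
            candidate[i // 8] ^= 1 << (i % 8)      # flip bit i
            if zlib.crc32(candidate) == expected_crc:
                return candidate
        return None                                # damage exceeds one bit

    original = bytearray(b"tar is not a backup format")
    crc = zlib.crc32(original)
    damaged = bytearray(original)
    damaged[5] ^= 0x08                             # simulate one flipped bit
    assert fix_single_bit(damaged, crc) == original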
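
P.P.P.S. And since I brought up the IPv4 header checksum: it is just the one's complement of the one's complement sum of the header taken as 16-bit words, which is why it is cheap enough to verify (and the packet dropped if it fails) on every hop. A sketch, again in Python, checked against the usual textbook example header:

    import struct

    def ipv4_header_checksum(header):
        """RFC 791 checksum of a header whose checksum field is zeroed."""
        if len(header) % 2:
            header += b"\x00"                      # pad to whole 16-bit words
        total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
        while total >> 16:                         # fold carries back in
            total = (total & 0xFFFF) + (total >> 16)
        return ~total & 0xFFFF

    # Textbook example header with the checksum field zeroed; expected 0xB861.
    hdr = bytes.fromhex("450000730000400040110000c0a80001c0a800c7")
    assert ipv4_header_checksum(hdr) == 0xB861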