On Mon, Dec 21, 2020 at 4:53 PM cagsm <cumandgets0mem00f(a)gmail.com> wrote:
> On Mon, Dec 21, 2020 at 4:00 PM cagsm
>> Revisiting this thread about robust file systems and what to use
>> these days on disk storage: checksums for data from the day one.
> I have no idea what you mean. My understanding was that they first
> only had metadata checksums, and only crc32 or something they wrote
> about, and these days they go for a better checksum algorithm and for
> the data itself.
Originally only crc32c was implemented; currently btrfs additionally
supports xxhash64, sha256, and blake2b. What is "better" or "worse"
depends on your criteria. Personally I think that for detecting random
corruption crc32 is probably good enough, and no hash function is
immune to collisions.
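For reference, the checksum algorithm can be chosen at mkfs time (btrfs-progs >= 5.5; /dev/sdX below is a placeholder device). The second part is a runnable sketch of the general idea that any checksum catches a flipped byte, using the POSIX cksum CRC as a stand-in for btrfs's crc32c:

```shell
# Choosing the checksum at filesystem creation (placeholder device):
#   mkfs.btrfs --csum xxhash /dev/sdX   # also: crc32c (default), sha256, blake2

# Concept demo: a CRC changes when a single byte flips.
printf 'hello world' > /tmp/block.bin
orig=$(cksum /tmp/block.bin | cut -d' ' -f1)
printf 'hellp world' > /tmp/block.bin      # simulate silent corruption
new=$(cksum /tmp/block.bin | cut -d' ' -f1)
if [ "$orig" != "$new" ]; then
    echo "corruption detected"
fi
```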
> My bad? So can I happily go for btrfs and restore my data even if my
> single physical disk has some kind of outage, such as defective
> blocks?
To restore defective blocks you need a good copy of those blocks. The
btrfs checksum enables btrfs to detect data corruption; it is not a
replacement for a second good copy of the data.
On a single disk you can use the dup profile for metadata and/or data
to keep two copies.
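As a sketch, with a placeholder device /dev/sdX and mountpoint /mnt (these commands need root and a real device, so treat them as illustration):

```shell
# Create a single-disk filesystem with duplicated metadata and data:
mkfs.btrfs -m dup -d dup /dev/sdX

# Or convert an existing, mounted filesystem in place:
btrfs balance start -mconvert=dup -dconvert=dup /mnt
```

Note that dup keeps both copies on the same device, so it protects against bad sectors but not against losing the whole disk.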
> Or what exactly is covered and handled by these checksums for the
> data bytes themselves?
Huh? The checksum for data bytes covers the integrity of the data
bytes. It allows btrfs to detect data corruption and to avoid silently
returning corrupted data to the application. If there is a second (or
third) good copy of the same data, btrfs will use it instead and will
replace the corrupted copy with a good copy (I am not sure whether
that happens automatically on read, though; scrub does it).
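To trigger that verification explicitly, scrub reads everything and repairs from a good copy where one exists (placeholder mountpoint /mnt; needs root):

```shell
# Read all data/metadata, verify checksums, repair from a good copy if any:
btrfs scrub start /mnt
btrfs scrub status /mnt   # shows progress and checksum/correctable error counts
```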