(In reply to Wenruo Qu from comment #7)
> A lot of transid errors are the ones we didn't expect:
>
> parent transid verify failed on 8781824 wanted 276925 found 277794
> parent transid verify failed on 1095892992 wanted 276925 found 277710
>
> They are all writes in the future.
>
> I cannot really say what the cause is, but some guesses include:
>
> - Broken COW
>   AKA, writes into some existing metadata.
>   This may happen if your cache is corrupted.
>
> - Bad cache management of the underlying stack
>   It could be dm-crypt or hardware not handling the write cache.
>   But I doubt it, as all the metadata corruption is happening
>   for both copies.
>
> Furthermore, the corruption is not limited to the extent tree; it also
> affects some fs trees.
>
> Thankfully it looks like only root 1628 is corrupted, thus "-o ro,rescue=all"
> may be able to mount, allowing you to back up most things except what is in
> subvolume 1628.

The mount worked, but my home is still missing. I guess that's because it's not in the same subvolume, right?

> I don't really believe we can repair the whole fs back to RW status, due to
> so many corrupted extent trees.
>
> But if you really want an adventure, after backing up all your data, you may
> want to try your luck with "btrfs check --init-extent-tree".

I will try that :)
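For reference, a rough sketch of the recovery sequence discussed here. The device and mount point names are placeholders (adjust for your system); the commands are only echoed, not executed, since `btrfs check --init-extent-tree` is destructive and should be run only after a full backup:

```shell
#!/bin/sh
# Placeholder names -- substitute your actual device and mount point.
DEV=/dev/sdX1
MNT=/mnt/rescue

# Step 1: read-only rescue mount, skipping as many broken trees as possible,
# so the surviving data can be backed up first.
MOUNT_CMD="mount -o ro,rescue=all $DEV $MNT"

# Step 2 (last resort, AFTER backup): rebuild the extent tree from scratch.
CHECK_CMD="btrfs check --init-extent-tree $DEV"

# Echo instead of running -- these commands need a real (and sacrificial) device.
echo "$MOUNT_CMD"
echo "$CHECK_CMD"
```

Note that `rescue=all` mounts the default subvolume; data in other subvolumes (such as a separate home subvolume) has to be reached via the top-level subvolume or mounted with an explicit `subvol=`/`subvolid=` option.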