On 11/14/21 3:56 AM, Klaus Vink Slott wrote:
Hi.
I use a retired office PC with a Core i5 as a NAS. I have dropped in 2 disks (3.5T SATA), configured as md RAID1 with LVM on top. It has worked flawlessly for a year or so, but now, after the latest 2 reboots, it has triggered a RAID rebuild.
The only thing I find in dmesg is this (though I am not sure what to search for):
klaus@raagi:~> sudo dmesg | grep " md"
[    4.131071] md/raid1:md127: active with 2 out of 2 mirrors
[    4.282183] md127: detected capacity change from 0 to 4000261472256
[   64.407382] md: data-check of RAID array md127
[21681.409218] md: md127: data-check interrupted.
[25477.760407] md: data-check of RAID array md127
[47096.271827] md: md127: data-check interrupted.
I left the PC running after reboot, so I am not sure why the data-check was interrupted.
Any ideas on why it has started to rebuild after every reboot?
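(For reference, a quick way to tell a routine data-check apart from a full rebuild is to look at md's sysfs state; this is a sketch that assumes the array name md127 from the dmesg output above.)

```shell
# sync_action reports what md is doing right now: idle, check, repair, or resync.
# A scheduled "check" (often a monthly cron/systemd timer) is normal;
# a "resync" after every boot is not.
cat /sys/block/md127/md/sync_action 2>/dev/null

# /proc/mdstat shows the same thing with a progress line, e.g.
#   [=>...................]  check =  8.2% (321024/3906592) finish=12.3min
grep -A 3 '^md127' /proc/mdstat 2>/dev/null || true
```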
Also, add details about what OS and release, or at least the kernel, mdadm and LVM versions. (LVM shouldn't matter.)

You can figure right at about 2 hours per TB to rebuild (or sync) a RAID1 array.

First thing I would check is 'smartctl -a /dev/drive' to get the health of each individual disk. You can do that from a boot disk, or while the system is running. Here "drive" would be, e.g., sda or sdc, etc. (whatever the disk is).

You can find out which physical disks md thinks are part of the array with 'mdadm -D /dev/md127' (as root). It will show, e.g.:

# mdadm -D /dev/md4
/dev/md4:
           Version : 1.2
     Creation Time : Mon Mar 21 02:27:21 2016
        Raid Level : raid1
        Array Size : 2930135488 (2794.39 GiB 3000.46 GB)
     Used Dev Size : 2930135488 (2794.39 GiB 3000.46 GB)
      Raid Devices : 2
     Total Devices : 2
       Persistence : Superblock is persistent
     Intent Bitmap : Internal
       Update Time : Sat Nov 13 19:53:10 2021
             State : clean
    Active Devices : 2
   Working Devices : 2
    Failed Devices : 0
     Spare Devices : 0
Consistency Policy : bitmap
              Name : valkyrie:4  (local to host valkyrie)
              UUID : 6e520607:f152d8b9:dd2a3bec:5f9dc875
            Events : 12697

    Number   Major   Minor   RaidDevice State
       3       8       32        0      active sync   /dev/sdc
       2       8       48        1      active sync   /dev/sdd

(look at the last two lines here --------------------- ^^^^^^^^ )

I hate to venture a guess, but it looks like one of your disks either isn't initialized when the array is started and then appears later, prompting an add of that disk to the array and a rebuild (check dmesg for each drive's initialization), or the disk is just losing its mind each time it is shut down.

This may need to be asked on the mdadm list, e.g. linux-raid@vger.kernel.org

BACKUP your DATA from the GOOD disk NOW! (fix the problem after that)

--
David C. Rankin, J.D., P.E.
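(The smartctl and mdadm checks above can be combined into one pass over every member disk. This is a sketch, not a definitive script; /dev/md127 is assumed from the dmesg output, and the awk pattern simply picks the device name off the end of mdadm's device-table lines.)

```shell
#!/bin/sh
# Sketch: SMART-check every member disk of an md array (run as root).
# /dev/md127 assumed from the thread above; pass another array as $1.
ARRAY=${1:-/dev/md127}

# Pull member device names from the last column of mdadm's device table
# (the "active sync   /dev/sdX" lines shown above).
members=$(mdadm -D "$ARRAY" 2>/dev/null | awk '/active sync|spare|faulty/ {print $NF}')

for disk in $members; do
    echo "=== $disk ==="
    smartctl -H "$disk"          # overall health: PASSED / FAILED
    # Attributes that most often flag a dying disk
    smartctl -A "$disk" | grep -Ei 'realloc|pending|uncorrect'
done
```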