[opensuse] setting up a server with raid
Hello,

(I'm pretty new to raid; I read the raid wiki but I'm not sure I understood it all...) Finally, after some tests, I'm trying the following config:

* one partition on each of the three disks, roughly 1TB raw
* RAID 1 across the three partitions
* LVM with one volume group on the 1TB raid
* three volumes in LVM: swap (2GB), root (50GB) and /data (the rest)

From what I read grub should boot - at least the yast install didn't complain, and it is installing now. I did this to have only one raid array for all three partitions. I left GPT aside, because I don't have UEFI on this computer and don't want to trick grub into booting GPT. I know there is a special raid setup for booting, and I hope yast takes care of that.

+ a last-minute :-( problem: it's the first time I've used LVM. Yast obliged me to create an LVM group before the partitions, and now it can't create it, giving "error -4017". Maybe it's only a warning, because I could click on "continue" and the install is running. I will see if it boots...

thanks
jdd
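[For readers new to this stack, a minimal sketch of the equivalent manual setup; the device names /dev/sda1-/dev/sdc1, the array name /dev/md0 and the volume group name "system" are assumptions, and YaST does all of this itself during installation:]

# Three-way RAID1 mirror built from one ~1TB partition per disk
mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      /dev/sda1 /dev/sdb1 /dev/sdc1

# LVM stacked on top of the array: one volume group, three logical volumes
pvcreate /dev/md0
vgcreate system /dev/md0
lvcreate -L 2G -n swap system         # swap
lvcreate -L 50G -n root system        # /
lvcreate -l 100%FREE -n data system   # /data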
On 2016-12-21 20:53, jdd wrote:

I did this to have only one raid array for all three partitions. I left GPT aside, because I don't have UEFI on this computer and don't want to trick grub into booting GPT.

Grub2 boots from GPT just fine, no tricks. I'm using it on my main computer. The MBR has a syslinux image that understands GPT, one GPT partition is marked bootable, and grub2 is installed there. YaST did this on its own, not me. I had problems understanding it, though. But I use neither LVM nor RAID on the system.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 22/12/2016 at 02:00, Carlos E. R. wrote:

On 2016-12-21 20:53, jdd wrote:

I did this to have only one raid array for all three partitions. I left GPT aside, because I don't have UEFI on this computer and don't want to trick grub into booting GPT.

Grub2 boots from GPT just fine, no tricks.

the problem is with legacy BIOS and GPT. In fact the (yast) install doesn't finish; it stops on a grub error.

But I use neither LVM nor RAID on the system.

apparently, if I use GPT, I need a special grub partition (with a special ID), and this can't be on lvm/raid. If so, how can I have a bootable second disk? It needs to be reset for each kernel update.

with my present config (if it works - I have 3 weeks to test it), I have one 1TB partition (it has to be less than 2TB) on each disk, assembled as raid 1. On this raid, to be able to partition it, I have lvm with 2GB swap, 50GB root and the rest as data.

I expect this to give me three identical disks, that is three bootable disks, each with a first partition of 1TB (and eventually some unused space).

in fact, I had the -xxx error, but the install could continue. The grub step took a long time, and it being past midnight, I went to bed. This morning I found the computer up and running, and I could log in and reboot. I still have to check how it works - is it really running as raid, as lvm... I have to learn the tools.

thanks
jdd
On Thu, Dec 22, 2016 at 10:53 AM, jdd <jdd@dodin.org> wrote: ...
Grub2 boots from GPT just fine, no tricks.
the problem is with legacy BIOS and GPT. In fact the (yast) install doesn't finish; it stops on a grub error.

But I use neither LVM nor RAID on the system.

apparently, if I use GPT, I need a special grub partition (with a special ID), and this can't be on lvm/raid.

Correct. Or you install grub2 on a partition and "generic code" in the MBR (which in this case would be Syslinux GPTMBR). But if the only partition on the disk is the one for Linux MD, this is not possible.

If so, how can I have a bootable second disk? It needs to be reset for each kernel update.

You (or rather the installer) create this special partition on the second disk and install grub2 on it.

with my present config (if it works - I have 3 weeks to test it), I have one 1TB partition (it has to be less than 2TB) on each disk, assembled as raid 1.

Well ... YaST supports Linux MD RAID1 with 2 disks and should automatically configure bootloader installation on both disks. I am not sure whether it supports any other Linux MD setup (last time I took a look at the code it did not). It would actually be a valid request that does not even require any major redesign - grub2 itself does not care whether a Linux MD array has 2, 3 or 23 disks, so YaST would just need to call grub-install for each of them.

on this raid, to be able to partition it, I have lvm with 2GB swap, 50GB root and the rest as data

That's fine as long as you have enough room to install grub outside of this partition. 1MB, for the default post-MBR gap or for a BIOS boot partition, should be enough, though. Of course, if the installer does not do it automatically, you can always install grub manually later; and I expect that adding the third disk to /etc/default/grub_installdevice should ensure that this and future grub updates get installed on all three disks. Not sure how the YaST bootloader module will behave though (you tell :) )
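[As a hedged illustration of what Andrei describes - the disk names are assumptions - making every member of the mirror bootable comes down to running grub2-install once per disk and listing each disk in /etc/default/grub_installdevice so that updates are re-applied everywhere:]

# Install the boot loader on each member disk of the array
grub2-install /dev/sda
grub2-install /dev/sdb
grub2-install /dev/sdc

# /etc/default/grub_installdevice: one device per line; openSUSE's
# bootloader tooling re-runs the installation on every listed device
/dev/sda
/dev/sdb
/dev/sdc
activate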
On 22/12/2016 at 09:06, Andrei Borzenkov wrote:

Well ... YaST supports Linux MD RAID1 with 2 disks

also with three - at least that's what I configured in the installer, and it seems to be OK

Of course, if the installer does not do it automatically, you can always install grub manually later; and I expect that adding the third disk to /etc/default/grub_installdevice should ensure that this and future grub updates get installed on all three disks. Not sure how the YaST bootloader module will behave though (you tell :) )

right, there is still the MBR outside of the raid. And I just checked that all three disks are in the /etc/default/grub_installdevice file (together with the word "activate"), so yes, YaST does a very good job :-)

thanks
jdd
On 2016-12-22 08:53, jdd wrote:
On 22/12/2016 at 02:00, Carlos E. R. wrote:

On 2016-12-21 20:53, jdd wrote:

I did this to have only one raid array for all three partitions. I left GPT aside, because I don't have UEFI on this computer and don't want to trick grub into booting GPT.

Grub2 boots from GPT just fine, no tricks.

the problem is with legacy BIOS and GPT. In fact the (yast) install doesn't finish; it stops on a grub error.

Sorry, I forgot to say it: yes, the above paragraph applies to legacy BIOS. Yes, my main computer has legacy BIOS, GPT and Grub2. It boots fine.

But I use neither LVM nor RAID on the system.

apparently, if I use GPT, I need a special grub partition (with a special ID), and this can't be on lvm/raid. If so, how can I have a bootable second disk? It needs to be reset for each kernel update.

That applies if Grub is in the MBR. Not if it is in a partition. And it has to be outside of LVM/Raid, yes. In the /boot partition.

-- Cheers/Saludos, Carlos E. R. (testing openSUSE Leap 42.2, at Minas-Anor)
On Thu, Dec 22, 2016 at 5:11 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:
On 2016-12-22 08:53, jdd wrote: ...
apparently, if I use GPT, I need a special grub partition (with a special ID), and this can't be on lvm/raid. If so, how can I have a bootable second disk? It needs to be reset for each kernel update.

That applies if Grub is in the MBR. Not if it is in a partition. And it has to be outside of LVM/Raid, yes. In the /boot partition.

Oh, no, please do not add to the confusion. You are completely confused. The special partition is unrelated to /boot; it is used instead of the post-MBR gap, which does not (at least is not guaranteed to) exist on GPT.
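[For illustration only - the partition number and device name are assumptions: on a GPT disk booted with legacy BIOS, the usual fix is a small BIOS boot partition (GPT type EF02) that gives grub2 the embedding space the post-MBR gap would otherwise provide:]

# Reserve 1 MiB (sectors 2048-4095) as a BIOS boot partition
sgdisk --new=1:2048:4095 --typecode=1:EF02 --change-name=1:"BIOS boot" /dev/sda
# grub2-install then embeds its core image there instead of in a post-MBR gap
grub2-install /dev/sda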
On 2016-12-22 15:47, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 5:11 PM, Carlos E. R. <robin.listas@telefonica.net> wrote:

On 2016-12-22 08:53, jdd wrote: ...

apparently, if I use GPT, I need a special grub partition (with a special ID), and this can't be on lvm/raid. If so, how can I have a bootable second disk? It needs to be reset for each kernel update.

That applies if Grub is in the MBR. Not if it is in a partition. And it has to be outside of LVM/Raid, yes. In the /boot partition.

Oh, no, please do not add to the confusion. You are completely confused. The special partition is unrelated to /boot; it is used instead of the post-MBR gap, which does not (at least is not guaranteed to) exist on GPT.

Yes, I know. But it is only needed if grub is to be installed in the MBR of a GPT disk. It is not needed if grub is installed in the root partition. You told me that in another thread. I did not mean that the special partition had to be in /boot, but that Grub would have to be in a /boot partition outside of the LVM or Raid. I did not say or mean that it was related to /boot. Sorry if I failed to explain myself correctly.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 2016-12-23 04:24, Andrei Borzenkov wrote:

On 23.12.2016 06:12, Carlos E. R. wrote:

I did not mean that the special partition had to be in /boot, but that Grub would have to be in a /boot partition outside of the LVM or Raid.

grub does *not* need to be outside of LVM or RAID. Do not spread urban legends.

No? Well, things have improved. I read that this was the case in the raid howto, ages ago.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 23/12/16 03:29, Carlos E. R. wrote:
On 2016-12-23 04:24, Andrei Borzenkov wrote:
On 23.12.2016 06:12, Carlos E. R. wrote:

I did not mean that the special partition had to be in /boot, but that Grub would have to be in a /boot partition outside of the LVM or Raid.

grub does *not* need to be outside of LVM or RAID. Do not spread urban legends.

No? Well, things have improved. I read that this was the case in the raid howto, ages ago.

The RAID howto is obsolete. It has been for a long time, but you know how stuff never dies on the web :-) It was already obsolete when the raid wiki was created, for Grub 1 and kernel 2.6 - disclaimer: I'm now updating the wiki to Grub 2 and kernel 4.

The problem with booting a system now is that there are so many options, and in fact Andrei is wrong here - Andrei, how on earth is the BIOS supposed to read a raid or lvm disk to find grub? Although I think Carlos is also confused here - there is no need for /boot to be on its own partition. Grub isn't installed in /boot, although that's where all its user-space files live. Historically, grub was ALWAYS installed in either the MBR or a partition boot record. With an MBR partition table, this is still the case. With a GPT, you need to give grub its own partition.

Let's take a totally modern, up-to-date system. You use UEFI, which boots the kernel directly, and grub is obsolete. UEFI needs a partition on disk where it can store all its secure-boot files, its keystore stuff, the hardware drivers it needs, etc etc. And it also stores the linux kernel there.

Now let's look at mbr/gpt. Back in the old days, grub was stored in the MBR, and your first partition would typically start just a few sectors in. That's why boot-loaders had to be tiny. And why lilo always had to be regenerated every time you updated the kernel. And why, if you try to install grub2 on an old system, it will fail. And why the MS bootloader and the linux bootloaders used to stomp on each other so often, by accident or design. And why any modern fdisk now starts your first partition at sector 2048 or thereabouts! That leaves a meg of free space at the start of your disk for all the elementary system startup stuff you need.

When GPT came along, I suspect they decided that even leaving a meg of space at the start of the disk was asking for trouble - different programs could try to use the - allegedly - unallocated space in different ways, stomping over each other and causing havoc. So now, if you want to install grub on a GPT disk, you have to give it its own small partition.

Which means, if you have a raid setup, you should install grub on EVERY disk. I run a 2-disk raid-1 mirror, and that's my setup. If the first drive fails, the second drive should be an exact copy, right down to the boot code and grub, so the system will come straight back up on that.

The other thing to watch out for now is that, with a modern setup, you MUST use an initramfs to boot a raid setup with grub (with one minor exception, namely a v1.0 superblock and raid-1).

Cheers, Wol
On 24.12.2016 01:33, Wols Lists wrote:

On 23/12/16 03:29, Carlos E. R. wrote:

On 2016-12-23 04:24, Andrei Borzenkov wrote:

On 23.12.2016 06:12, Carlos E. R. wrote:

I did not mean that the special partition had to be in /boot, but that Grub would have to be in a /boot partition outside of the LVM or Raid.

grub does *not* need to be outside of LVM or RAID. Do not spread urban legends.

No? Well, things have improved. I read that this was the case in the raid howto, ages ago.

...

The problem with booting a system now is that there are so many options, and in fact Andrei is wrong here - Andrei, how on earth is the BIOS supposed to read a raid or lvm disk to find grub?

The statement was about the part of grub that is located in the /boot filesystem, not the part that is loaded by the firmware. ...

Which means, if you have a raid setup, you should install grub on EVERY disk.

Which is exactly what YaST did for the original poster. ...
On 24/12/2016 at 07:32, Andrei Borzenkov wrote:

Which means, if you have a raid setup, you should install grub on EVERY disk.

Which is exactly what YaST did for the original poster.

...

yes, and anyway mine is a v1.0 superblock and mirror (raid 1) array

jdd
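[A quick way to verify that - the array name /dev/md0 is an assumption - is to inspect the array details:]

mdadm --detail /dev/md0
# look for "Raid Level : raid1" and "Version : 1.0" in the output;
# /proc/mdstat gives a one-line summary of all running arrays
cat /proc/mdstat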
On 22/12/2016 at 15:11, Carlos E. R. wrote:

On 2016-12-22 08:53, jdd wrote:

the problem is with legacy BIOS and GPT. In fact the (yast) install doesn't finish; it stops on a grub error.

Sorry, I forgot to say it: yes, the above paragraph applies to legacy BIOS. Yes, my main computer has legacy BIOS, GPT and Grub2. It boots fine.

as I said, my BIOS/GPT install stopped with a grub error. I can't say much more. No problem with an msdos table, by the way: in expert mode (option at the bottom right of the screen), YaST allows switching between dos and gpt.

jdd
jdd wrote:
Hello,
(I'm pretty new to raid; I read the raid wiki but I'm not sure I understood it all...)

finally, after some tests, I'm trying the following config:

* one partition on each of the three disks, roughly 1TB raw
* RAID 1 across the three partitions
* LVM with one volume group on the 1TB raid
* three volumes in LVM: swap (2GB), root (50GB) and /data (the rest)

from what I read grub should boot - at least the yast install didn't complain, and it is installing now.

Personally, I would not boot from LVM; I would have created a separate RAID1 on two separate partitions. Just my experience - I am not familiar with grub and lvm in combination, but I tend to keep LVM away from the boot sequence.

It's the first time I've used LVM. Yast obliged me to create an LVM group before the partitions, and now it can't create it, giving "error -4017"

There is even a bug report on it - https://bugzilla.novell.com/show_bug.cgi?id=584970 Apparently fixed long ago, though.

-- Per Jessen, Zürich
On 22/12/2016 at 08:28, Per Jessen wrote:

jdd wrote:

It's the first time I've used LVM. Yast obliged me to create an LVM group before the partitions, and now it can't create it, giving "error -4017"

There is even a bug report on it - https://bugzilla.novell.com/show_bug.cgi?id=584970

Apparently fixed long ago, though.

at least there is a "continue" button. I clicked on it and the install finished

jdd
jdd wrote:
On 22/12/2016 at 08:28, Per Jessen wrote:

jdd wrote:

It's the first time I've used LVM. Yast obliged me to create an LVM group before the partitions, and now it can't create it, giving "error -4017"

There is even a bug report on it - https://bugzilla.novell.com/show_bug.cgi?id=584970

Apparently fixed long ago, though.

at least there is a "continue" button. I clicked on it and the install finished

I would reopen that bug report - if it was fixed in 11.x, it seems to have reappeared. You could also have a look in the yast logs to see what might have happened.

-- Per Jessen, Zürich
On 22/12/2016 at 08:48, Per Jessen wrote:

I would reopen that bug report - if it was fixed in 11.x, it seems to have reappeared. You could also have a look in the yast logs to see what might have happened.

are the install logs kept after the install finishes?

thanks
jdd
On Thu, Dec 22, 2016 at 10:56 AM, jdd <jdd@dodin.org> wrote:
On 22/12/2016 at 08:48, Per Jessen wrote:

I would reopen that bug report - if it was fixed in 11.x, it seems to have reappeared. You could also have a look in the yast logs to see what might have happened.

are the install logs kept after the install finishes?

/var/log/YaST2
On 22/12/2016 at 09:13, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 10:56 AM, jdd <jdd@dodin.org> wrote:

are the install logs kept after the install finishes?

/var/log/YaST2

the usual place, then - good

thanks
jdd
On 2016-12-22 10:11, jdd wrote:
On 22/12/2016 at 09:13, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 10:56 AM, jdd <jdd@dodin.org> wrote:

are the install logs kept after the install finishes?

/var/log/YaST2

the usual place, then - good

But they rotate.

-- Cheers/Saludos, Carlos E. R. (testing openSUSE Leap 42.2, at Minas-Anor)
Carlos E. R. wrote:
On 2016-12-22 10:11, jdd wrote:
On 22/12/2016 at 09:13, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 10:56 AM, jdd <jdd@dodin.org> wrote:

are the install logs kept after the install finishes?

/var/log/YaST2

the usual place, then - good

But they rotate.

Eventually, but it takes a long time. The installation log is y2log-1.gz (or something like that); the current one is y2log.

-- Per Jessen, Zürich
On 2016-12-22 15:43, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-12-22 10:11, jdd wrote:
On 22/12/2016 at 09:13, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 10:56 AM, jdd <> wrote:

are the install logs kept after the install finishes?

/var/log/YaST2

the usual place, then - good

But they rotate.

Eventually, but it takes a long time. The installation log is y2log-1.gz (or something like that); the current one is y2log.

I'm unsure. During an install, yast runs many times - a lot of activity. Once I lost the part of interest this way.

-- Cheers/Saludos, Carlos E. R. (testing openSUSE Leap 42.2, at Minas-Anor)
Carlos E. R. wrote:
On 2016-12-22 15:43, Per Jessen wrote:
Carlos E. R. wrote:
On 2016-12-22 10:11, jdd wrote:
On 22/12/2016 at 09:13, Andrei Borzenkov wrote:

On Thu, Dec 22, 2016 at 10:56 AM, jdd <> wrote:

are the install logs kept after the install finishes?

/var/log/YaST2

the usual place, then - good

But they rotate.

Eventually, but it takes a long time. The installation log is y2log-1.gz (or something like that); the current one is y2log.

I'm unsure. During an install, yast runs many times - a lot of activity.

For an installation, I usually only start yast once, that's it. Looking at a fairly current system:

# l /var/log/YaST2/y2*
-rw-r--r-- 1 root root    2750 Jul 11  2015 /var/log/YaST2/y2changes
-rw-r--r-- 1 root root 2174856 Dec 21 10:22 /var/log/YaST2/y2log
-rw-r--r-- 1 root root  376946 Jul 11  2015 /var/log/YaST2/y2log-1.gz
-rw-r--r-- 1 root root     531 Jul 11  2015 /var/log/YaST2/y2logmkinitrd
-rw-r--r-- 1 root root    9212 Jul 11  2015 /var/log/YaST2/y2start.log

If it's a desktop, these days I have to start yast to enable network printers, but that's all.

-- Per Jessen, Zürich
On 2016-12-22 19:06, Per Jessen wrote:

Carlos E. R. wrote:

I'm unsure. During an install, yast runs many times - a lot of activity.

For an installation, I usually only start yast once, that's it. Looking at a fairly current system:

# l /var/log/YaST2/y2*
-rw-r--r-- 1 root root    2750 Jul 11  2015 /var/log/YaST2/y2changes
-rw-r--r-- 1 root root 2174856 Dec 21 10:22 /var/log/YaST2/y2log
-rw-r--r-- 1 root root  376946 Jul 11  2015 /var/log/YaST2/y2log-1.gz
-rw-r--r-- 1 root root     531 Jul 11  2015 /var/log/YaST2/y2logmkinitrd
-rw-r--r-- 1 root root    9212 Jul 11  2015 /var/log/YaST2/y2start.log

If it's a desktop, these days I have to start yast to enable network printers, but that's all.

minas-tirith:~ # l /var/log/YaST2/y2*
-rw-r--r-- 1 root root   11486 Oct 13  2015 /var/log/YaST2/y2changes
-rw-r--r-- 1 root root 9434587 Dec  8 03:55 /var/log/YaST2/y2log
-rw-r--r-- 1 root root  510125 Dec  1 02:26 /var/log/YaST2/y2log-1.gz
-rw-r--r-- 1 root root  473254 Sep 27 01:58 /var/log/YaST2/y2log-2.gz
-rw-r--r-- 1 root root  497072 Sep 27 01:28 /var/log/YaST2/y2log-3.gz
-rw-r--r-- 1 root root  493455 Aug 30 00:06 /var/log/YaST2/y2log-4.gz
-rw-r--r-- 1 root root  457566 Aug 29 23:58 /var/log/YaST2/y2log-5.gz
-rw-r--r-- 1 root root  471470 Aug 29 23:54 /var/log/YaST2/y2log-6.gz
-rw-r--r-- 1 root root  445838 Aug 29 23:51 /var/log/YaST2/y2log-7.gz
-rw-r--r-- 1 root root  488591 Aug 29 23:46 /var/log/YaST2/y2log-8.gz
-rw-r--r-- 1 root root  495654 Jul 25 03:37 /var/log/YaST2/y2log-9.gz
-rw-r--r-- 1 root root       0 Jan  6  2014 /var/log/YaST2/y2logMount
-rw-r--r-- 1 root root     370 Jan  6  2014 /var/log/YaST2/y2log_bootloader
-rw-r--r-- 1 root root    5966 Jan  6  2014 /var/log/YaST2/y2logmkinitrd
-rw-r--r-- 1 root root    3577 Jan  6  2014 /var/log/YaST2/y2start.log
minas-tirith:~ #

You see, I have many more, but this system is much older than that. In the history file, the first entry goes back to 2009. On a recently installed system:

Isengard:~ # l /var/log/YaST2/y2*
-rw-r--r-- 1 root root    9777 Nov 28 02:54 /var/log/YaST2/y2changes
-rw-r--r-- 1 root root 7060176 Dec 21 11:24 /var/log/YaST2/y2log
-rw-r--r-- 1 root root  618265 Nov 28 16:26 /var/log/YaST2/y2log-1.gz
-rw-r--r-- 1 root root  621634 Nov 27 02:07 /var/log/YaST2/y2log-2.gz
-rw-r--r-- 1 root root  601684 Nov 26 23:37 /var/log/YaST2/y2log-3.gz
-rw-r--r-- 1 root root    3661 Nov 26 23:37 /var/log/YaST2/y2start.log
Isengard:~ #

This one does have the installation records, I understand.

-- Cheers / Saludos, Carlos E. R. (from 13.1 x86_64 "Bottle" (Minas Tirith))
On 22/12/2016 at 08:48, Per Jessen wrote:

There is even a bug report on it - https://bugzilla.novell.com/show_bug.cgi?id=584970

updated

jdd
On 22/12/2016 at 08:48, Per Jessen wrote:

jdd wrote:

On 22/12/2016 at 08:28, Per Jessen wrote:

There is even a bug report on it - https://bugzilla.novell.com/show_bug.cgi?id=584970

Apparently fixed long ago, though.

at least there is a "continue" button. I clicked on it and the install finished

I would reopen that bug report - if it was fixed in 11.x, it seems to have reappeared. You could also have a look in the yast logs to see what might have happened.

reopened

jdd
On 21/12/2016 at 20:53, jdd wrote:

I will see if it boots...

it boots, and all seems to be set up like I want it

jdd

On 24/12/2016 at 08:33, David C. Rankin wrote:

https://wiki.archlinux.org/index.php/RAID (worth a read to fill in the gaps on mdadm; see the recommendation on 'scrubbing')
On 24/12/2016 at 08:33, David C. Rankin wrote:

On 21/12/2016 at 20:53, jdd wrote:

I will see if it boots...

it boots, and all seems to be set up like I want it

jdd

https://wiki.archlinux.org/index.php/RAID

(worth a read to fill in the gaps on mdadm; see the recommendation on 'scrubbing')

I have already read a large part of the raid wiki (but I can't say I understood it all :-() I still have to check whether scrubbing is already configured by yast as it should be. IMHO it's a very important part of the install, especially with old ordinary hdds - also one of the reasons I used three disks + a backup.

right now this is a test config for which I use stock hardware; I may as well completely drop raid if I find it too complicated or too expensive, but I have never had the occasion to play with it before :-))

jdd
On 12/24/2016 02:31 AM, jdd wrote:
right now this is a test config for which I use stock hardware; I may as well completely drop raid if I find it too complicated or too expensive, but I have never had the occasion to play with it before :-))

raid is never too complicated or expensive. mdadm is absolutely bullet-proof, even in a simple mirror config. The setup is simple (just follow the steps in the archwiki). The scrubbing isn't part of the default config; it is something you trigger (weekly at 00:00 Monday is a good time). Rebuilds are automatic when you replace a device (fail, remove, add).

mdadm is so flexible, there is virtually no scenario that you cannot gracefully recover from. The author, Neil Brown, is very active on the linux-raid list and is only an e-mail away at linux-raid@vger.kernel.org if you run into anything that isn't explained by the documentation (a very friendly list that can walk you through any one-off situation).

Literally, I've found mdadm the best fire-and-forget weapon against data loss on a server you can ask for. For both OS and data. I install the bootloader on all disks that make up my OS raid. If any one disk fails, you can automatically boot from the remaining disk(s), regardless of which one fails.

For data, there is no better protection against disk failure. You can literally just unplug a disk for fun; the server continues as if nothing has happened. With a hot-swap setup, you just pop the bad drive out, put in the new one (or you already have the hot spare configured) and recovery is done virtually automatically. For the cost of an additional drive, the data security it buys is priceless.

I have no complaints at all after using both linux-raid (mdadm) and fake-raid (dmraid) since 2001. I've used linux-raid exclusively for the past 8 years or so and wouldn't recommend anything else. I've also run the 8-port LSI MegaRAID cards with battery-protected write-back cache. The reality is that you gain no performance from the hardware solution unless you are saturating your I/O (and you are locked into a proprietary hardware raid format). For my uses, I found no benefit and simply went back to linux-raid.

If you get stuck, just ask, and we are happy to help. It is well worth working through the setup if you intend to run a server (even if it is just to keep your family pictures on).

-- David C. Rankin, J.D., P.E.
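[The fail/remove/add cycle David mentions looks roughly like this; the array and partition names are assumptions:]

mdadm /dev/md0 --fail /dev/sdb1     # mark the flaky member as failed
mdadm /dev/md0 --remove /dev/sdb1   # detach it from the array
# ...swap the physical disk and partition it like the others, then:
mdadm /dev/md0 --add /dev/sdb1      # the rebuild starts automatically
cat /proc/mdstat                    # watch the resync progress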
On 29/12/2016 at 02:28, David C. Rankin wrote:

On 12/24/2016 02:31 AM, jdd wrote:

right now this is a test config for which I use stock hardware; I may as well completely drop raid if I find it too complicated or too expensive, but I have never had the occasion to play with it before :-))

raid is never too complicated or expensive. mdadm is absolutely bullet-proof

there is another thread around that shows it's not bullet-proof, and a strong advisory on the raid wiki not to use consumer grade disks. This alone makes it expensive. I can use several disks because I already have them in stock, so they are not free, but cheap.

it's a bit complicated because the wiki warns very strongly that one can kill one's config while trying to recover. Also, the commands to regain access to the disks in case of a big failure (for example a mobo failure) are more complicated than the usual rescue through a bind mount.

Fact is that in more than 10 years of server use, I have had only *one* disk failure (last summer), and I have no real "instant recovery" needs, so recovering from a backup is possible.

I'm trying raid mostly as a game and to use my sleeping hardware :-) and I know the openSUSE community (and the open source community at large) is very friendly :-)

thanks
jdd
(NB: I trim the posts, but I read everything :-)
On 29/12/16 07:35, jdd wrote:
On 29/12/2016 at 02:28, David C. Rankin wrote:

On 12/24/2016 02:31 AM, jdd wrote:

right now this is a test config for which I use stock hardware; I may as well completely drop raid if I find it too complicated or too expensive, but I have never had the occasion to play with it before :-))

raid is never too complicated or expensive. mdadm is absolutely bullet-proof

there is another thread around that shows it's not bullet-proof, and a strong advisory on the raid wiki not to use consumer grade disks. This alone makes it expensive. I can use several disks because I already have them in stock, so they are not free, but cheap.

The reason you shouldn't use consumer disks is that you can NOT alter the time-out on them, and the disk defaults interact badly with the linux defaults. That said, the difference in price between NAS and consumer disks isn't much - I think a 3TB Barracuda (bad choice) is currently £70, while a WD Red (good choice) is £100.

The thing with a consumer grade disk, even if you're not doing raid, is that a problem with it will cause your computer to appear to hang. So desktop disks aren't really even suitable for a desktop! :=)
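[A quick check of whether a given drive has the configurable time-out Wol refers to (SCT Error Recovery Control); the device name is an assumption:]

# Desktop firmware typically reports ERC as unsupported or disabled
smartctl -l scterc /dev/sda
# On a capable (NAS/enterprise) drive, cap recovery at 7.0 seconds:
smartctl -l scterc,70,70 /dev/sda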
it's a bit complicated because the wiki warns very strongly that one can kill one's config while trying to recover. Also, the commands to regain access to the disks in case of a big failure (for example a mobo failure) are more complicated than the usual rescue through a bind mount.

Fact is that in more than 10 years of server use, I have had only *one* disk failure (last summer), and I have no real "instant recovery" needs, so recovering from a backup is possible.

Having seen several disk failures, I'd much rather try to recover a broken raid than a broken disk. I think I've managed to recover a maximum of two broken disks, and I've seen a few more than that that I couldn't recover.

I'm trying raid mostly as a game and to use my sleeping hardware :-)

I've got a raid mirror because backups would be a lot of hassle. I do try to make sure anything important is flushed to DVD, but my home server has a 2TB /home partition, and it's over half full ...

and I know the openSUSE community (and the open source community at large) is very friendly :-)

As is the linux raid community :-) That warning on the raid page is really more there to scare people into coming to the list sooner rather than later. Unfortunately there's some really bad raid advice out there on the net that people fall foul of. And if somebody comes to the list BEFORE they try anything "dangerous" on their array, we can usually recover it. It's when they do something "stupid" because the net told them to that it gets difficult.

And the wiki is meant to be pretty comprehensive because, well, we see a lot of the tricky recoveries, and you'll notice that most of the pages say "gather this information and post it to the list", because then hopefully we can come straight back and say: "got it, do this, and your array should be back fine".

The next cases that need writing up are the ones where something has trashed the disk headers (GPT etc). There've been several successful array recoveries recently where that had happened ...

Cheers, Wol
On 29/12/2016 at 09:16, Wols Lists wrote:
The reason you shouldn't use consumer disks is that you can NOT alter the time-out on them,
it's what I have read
the difference in price between NAS and consumer disks isn't much - I think a 3TB Barracuda (bad choice) is currently £70, while a WD Red (good choice) is £100.

the disks I use are already in my hands, previously used for archives (barely run - but now I use 5TB archive disks) and I have around 5 of them :-) so free :-)

Having seen several disk failures, I'd much rather try to recover a broken raid than a broken disk.

I never try to recover a broken disk for myself (I often do for others, less backup-addicted than I am :-(), apart from for learning purposes. I always use the backup. Even with maximum care you can lose data, but mostly through human error, not hardware failure.

I've got a raid mirror because backups would be a lot of hassle.

raid is definitely not a backup; it doesn't protect you from running mkfs on the wrong disk (the last error I made that was a real problem: two days to rebuild from the backup) - I wanted to format a small sd card and typed the wrong dev... my bad. That's why I have *three* archive disks, two of them offline (not counting the most important things on the server).

etc). There've been several successful array recoveries recently where that had happened ...

good to know :-) I presently use old hardware (a Dell Optiplex 760) I got for free - I have never had an unrecoverable problem with old hardware - but I may use a cheap HP server with dual disk/raid and some kind of hosted backup in the near future.

thanks
jdd
On Thu, Dec 29, 2016 at 4:44 AM, jdd <jdd@dodin.org> wrote:
The reason you shouldn't use consumer disks is that you can NOT alter the time-out on them,
it's what I have read
the difference in price between NAS and consumer disks isn't much - I think a 3TB Barracuda (bad choice) is currently £70, while a WD Red (good choice) is £100.

the disks I use are already in my hands, previously used for archives (barely run - but now I use 5TB archive disks) and I have around 5 of them :-) so free :-)

I hope "5Tb archives" is a figure of speech. These Seagate "Archive" disks are designed in such a way that they are guaranteed bad for use in a RAID: http://www.seagate.com/enterprise-storage/hard-disk-drives/archive-hdd/

I posted about it a week ago: http://markmail.org/message/4evo2eis7sqkyfvf

In particular, writes can in some cases take 20 seconds to complete. A RAID system will eject a drive behaving like that from the mix without any actual media failures. SMR technology is the culprit, and it apparently was first used commercially in 5TB drives.

I saw a report where one user put two 5TB Seagate drives into a RAID 6 setup. Both of them were ejected from the raid set in short order. The good news was that the raid was rebuilt shortly before a 3rd SMR-based drive was ejected.

Greg
-- Greg Freemyer
On 29/12/2016 at 20:01, Greg Freemyer wrote:

I hope "5Tb archives" is a figure of speech.

These Seagate "Archive" disks are designed in such a way that they are guaranteed bad for use in a RAID: http://www.seagate.com/enterprise-storage/hard-disk-drives/archive-hdd/

no, but they are usb 3 external archive disks; no raid is involved, and they are only online for the time rsync needs, around once a month (round robin), and I archive only data, not the system. If I ever work on sensitive data, I do another backup when necessary.

jdd
On Thu, Dec 29, 2016 at 2:06 PM, jdd <jdd@dodin.org> wrote:
On 29/12/2016 at 20:01, Greg Freemyer wrote:

I hope "5Tb archives" is a figure of speech.

These Seagate "Archive" disks are designed in such a way that they are guaranteed bad for use in a RAID:

no, but they are usb 3 external archive disks; no raid is involved, and they are only online for the time rsync needs, around once a month (round robin), and I archive only data, not the system.

If I ever work on sensitive data, I do another backup when necessary.

jdd

Good.

The person who had the double HDD failure thought he would be smart: he removed two 5TB drives from their USB-3 enclosures and added them to his existing RAID array, rebuilding it to incorporate them. The drives were apparently the Seagate 5TB Archive drives. Data loss was avoided by pure luck, from what I saw of his situation.

Greg
On 12/29/2016 01:35 AM, jdd wrote:
there is another thread around that shows it's not bullet-proof, and a strong advisory on the raid wiki not to use consumer grade disks. This alone makes it expensive. I can use several disks because I already have them in stock, so they are not free, but cheap.

Yes, I've read this, "mostly FUD", and I always wonder if it is some disgruntled kid who picked up a keyboard after a failure and wrote that the sky is falling... I've never seen much difference between consumer grade and server grade (and many manufacturers, WD in particular, put out disks under the same label as both consumer and server grade). Other than the rash of Seagate failures about 6 years ago, I've rarely had disks run less than 40,000 hours (that's about 4.5 years of spinning). One old Maxtor DiamondMax 10 on a suse 11.0 box has been spinning for twice that, and another Seagate Barracuda ST3750528AS has 57806 hours on it and is still ticking...

My experience isn't much worse than yours. I've probably lost fewer than 10 disks over 16 years out of the 4 servers I have continually running. I've lost 'server' grade drives as quickly as I've lost consumer drives. In the quantity I buy, about 3-5 per year for all purposes, not just servers, you are not going to see a difference. I'm sure that if you are buying hundreds of disks per year, you will probably see a statistical benefit from the MTBF of the 'server' grade disks.

I remember it seeming complicated at first, but there are really only a handful of mdadm operations. After having done it a time or two, screwed it up a time or two, and recovered from my screw-ups (wiping partition tables, intentionally creating degraded arrays with 'missing' disks, moving non-raid systems to linux-raid, etc.), there really isn't much to it -- other than getting over the anxiety of issuing the 'mdadm foo' command to begin with.

Good luck with your sleeping hardware. Either way you go, whatever 'grade' of disks you use, you can't go wrong with linux-raid.

-- David C. Rankin, J.D., P.E.
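[For the curious, the spin-time figures David quotes come from SMART attribute 9; a hedged example of reading them, with the device name assumed:]

smartctl -A /dev/sda | grep -i power_on_hours   # raw value is hours spun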
On 29/12/16 08:45, David C. Rankin wrote:
Yes, I've read this, "mostly FUD", and I always wonder if it is some disgruntled kid who picked up a keyboard after a failure and wrote that the sky is falling... I've never seen much difference between consumer grade and server grade (and many manufacturers, WD in particular, put out disks under the same label as both consumer and server grade). Other than the rash of Seagate failures about 6 years ago, I've rarely had disks run less than 40,000 hours (that's about 4.5 years of spinning). One old Maxtor DiamondMax 10 on a suse 11.0 box has been spinning for twice that, and another Seagate Barracuda ST3750528AS has 57806 hours on it and is still ticking...

Read the wiki page on "timeout mismatch": https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

It's my guess that the manufacturer has one production line, and if the drives pass strict QA they stick enterprise firmware on them and call them enterprise drives. If they pass ordinary QA they stick desktop firmware on them and call them desktop drives.

The problem with desktop drives isn't the drives. It's putting them in a linux raid array. The desktop drive defaults - well, they're not defaults, they're hard-coded settings - interact badly with the linux defaults, requiring "expert" configuration to make the array safe.

It is my experience that FULLY HALF and more of the broken arrays that come to the linux raid mailing list for help are desktop drives, and the problem is timeout mismatch. It only takes one transient error on a desktop drive to knock it out of the array. Trying to recover causes a second drive to go, and your raid novice is screaming "the raid ate my data". Desktop drives are fine IF YOU KNOW WHAT YOU'RE DOING. Trouble is, too many people don't...

Cheers, Wol
On 29/12/2016 at 10:48, Wols Lists wrote:

It is my experience that FULLY HALF and more of the broken arrays that come to the linux raid mailing list for help are desktop drives, and the problem is timeout mismatch.

It only takes one transient error on a desktop drive to knock it out of the array. Trying to recover causes a second drive to go, and your raid novice is screaming "the raid ate my data". Desktop drives are fine IF YOU KNOW WHAT YOU'RE DOING. Trouble is, too many people don't...

it's what I understood, but I didn't see how to cope with this (other than changing disks). What is your experience?

thanks
jdd
On 29/12/16 09:56, jdd wrote:
On 29/12/2016 at 10:48, Wols Lists wrote:

It is my experience that FULLY HALF and more of the broken arrays that come to the linux raid mailing list for help are desktop drives, and the problem is timeout mismatch.

It only takes one transient error on a desktop drive to knock it out of the array. Trying to recover causes a second drive to go, and your raid novice is screaming "the raid ate my data". Desktop drives are fine IF YOU KNOW WHAT YOU'RE DOING. Trouble is, too many people don't...

it's what I understood, but I didn't see how to cope with this (other than changing disks)

what is your experience?

I've got two 3TB Seagate Barracudas, so not only are they desktop drives, they're also the very drive that has a dire reputation (Barracudas on the whole are fine; it was the 3TB ones that were dying left, right and centre). That said, I've not had any trouble at all. And with a mirror, trouble with the raid isn't quite so serious as with e.g. raid-5.

There's a script on that web page that alters the linux settings so that you don't have a timeout mismatch. That doesn't stop an apparent hang if you get a problem, though. You MUST run that script EVERY boot. I don't, so if I hit trouble ... it'll be my own fault.

That said, I'm planning to rebuild my computer when I can afford it, and I'll probably be buying Seagate NAS drives - dunno why, I much prefer Seagate to WD. Prejudice, I guess.

Cheers, Wol
On 29/12/2016 at 11:12, Wols Lists wrote:

There's a script on that web page that alters the linux settings so that you don't have a timeout mismatch. That doesn't stop an apparent hang if you get a problem, though.

this: https://github.com/fukawi2/raid-check/blob/master/raid-check.sh seems only to check the disks. Do you have a pointer for doing what you describe?

You MUST run that script EVERY boot.

that doesn't seem to be too much of a hassle on a 24/7 server :-)

thanks
jdd
On 29/12/16 13:03, jdd wrote:
On 29/12/2016 at 11:12, Wols Lists wrote:

There's a script on that web page that alters the linux settings so that you don't have a timeout mismatch. That doesn't stop an apparent hang if you get a problem, though.

this: https://github.com/fukawi2/raid-check/blob/master/raid-check.sh seems only to check the disks. Do you have a pointer for doing what you describe?

https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

Look for the script that adjusts the kernel read timeouts.

You MUST run that script EVERY boot.

that doesn't seem to be too much of a hassle on a 24/7 server :-)

thanks
jdd

Cheers, Wol
On 29/12/2016 at 15:57, Wols Lists wrote:

https://raid.wiki.kernel.org/index.php/Timeout_Mismatch

Look for the script that adjusts the kernel read timeouts.

You MUST run that script EVERY boot.

thanks, I had already read this page but forgot where :-(

jdd
On 2016-12-29 15:57, Wols Lists wrote:
On 29/12/16 13:03, jdd wrote:
On 29/12/2016 at 11:12, Wols Lists wrote:

There's a script on that web page that alters the linux settings so that you don't have a timeout mismatch. That doesn't stop an apparent hang if you get a problem, though.

this

https://github.com/fukawi2/raid-check/blob/master/raid-check.sh

seems only to check the disks. Do you have a pointer for doing what you describe?
Look for the script that adjusts the kernel read timeouts.
This?

#!/bin/bash
# For each disk: if the firmware supports SCT Error Recovery Control,
# cap error recovery at 7 seconds (70 deciseconds); otherwise raise the
# kernel command timeout to 180 s so it outlasts the drive's own retries.
for i in /dev/sd? ; do
    if smartctl -l scterc,70,70 $i > /dev/null ; then
        echo -n $i " is good "
    else
        echo 180 > /sys/block/${i/\/dev\/}/device/timeout
        echo -n $i " is bad "
    fi
    smartctl -i $i | egrep "(Device Model|Product:)"   # identify the drive
    blockdev --setra 1024 $i                           # set readahead
done

-- Cheers / Saludos, Carlos E. R. (from 42.2 x86_64 "Malachite" at Telcontar)
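[A sketch of one way to run such a script at every boot, as Wol insists on; the script path and unit name are assumptions, not an openSUSE default:]

# /etc/systemd/system/raid-timeouts.service
[Unit]
Description=Adjust disk timeouts for RAID member drives
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/set-raid-timeouts.sh

[Install]
WantedBy=multi-user.target

# then: systemctl daemon-reload && systemctl enable raid-timeouts.service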
jdd wrote:
Fact is that in more than 10 years of server use, I have had only *one* disk failure (last summer), and I have no real "instant recovery" needs, so recovering from a backup is possible.

When you have no high-availability requirements, RAID is certainly overkill, yes. As for disks dying:

- they WILL die. In my experience, the newer drives die faster than the older ones.
- on our external rented servers (21), all with RAID1 mirrors, after the first 5-6 years I see a fairly steady rate of 2 dead drives per annum.
- on our in-house storage servers (10x24 drives), we have about one dead drive per month.
- disks tend to die at the most inopportune moments - like when everyone's out of reach. Always use hot standby drives.

Of course, this is with an enterprise-style 24/7 duty cycle.

-- Per Jessen, Zürich
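[A hedged sketch of Per's hot-standby advice with mdadm, names assumed: once an array is complete, a further --add creates a spare that takes over automatically when a member fails:]

mdadm /dev/md0 --add /dev/sde1   # joins as a hot spare on a complete array
mdadm --detail /dev/md0          # the new device is listed as "spare"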
On 29/12/2016 at 10:38, Per Jessen wrote:

- disks tend to die at the most inopportune moments - like when everyone's out of reach. Always use hot standby drives.

right. The last disk failure I experienced was on a newly rented server, with no fresh backup, and I was in the West Indies for my parents' funerals when it happened. Two weeks with the server down; too bad for the users :-(

jdd
David C. Rankin wrote:
On 12/24/2016 02:31 AM, jdd wrote:
right now this is a test config for which I use stock hardware; I may as well completely drop raid if I find it too complicated or too expensive, but I have never had the occasion to play with it before :-))

raid is never too complicated or expensive. mdadm is absolutely bullet-proof, even in a simple mirror config. The setup is simple.

Couldn't agree more.

-- Per Jessen, Zürich
On 24.12.2016 09:31, jdd wrote:
I still have to check whether scrubbing is already configured by yast as it should be. IMHO it's a very important part of the install, especially with old ordinary hdds - also one of the reasons I used three disks + a backup

On my 13.2 server there is a cron job that comes with the mdadm package:

cat /etc/cron.d/mdadm
#
# cron.d/mdadm - regular redundancy checks
#
# Start checking each month early in the morning.
# Continue each day until all done
PATH=/sbin:/usr/sbin:/bin:/usr/bin
0 4 * * 0 root source /etc/sysconfig/mdadm; [ -n "$MDADM_CHECK_DURATION" -a -x /usr/share/mdadm/mdcheck -a $(date +\%d) -le 7 ] && /usr/share/mdadm/mdcheck --duration "$MDADM_CHECK_DURATION"
0 4 * * 1-6 root source /etc/sysconfig/mdadm; [ -n "$MDADM_CHECK_DURATION" -a -x /usr/share/mdadm/mdcheck ] && /usr/share/mdadm/mdcheck --continue --duration "$MDADM_CHECK_DURATION"

which does the job. Unfortunately this job is missing in 42.2. This is a bug, since I cannot see a replacement for it (no systemd timer). I opened a bug report.
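[Independently of that cron job, a scrub can be triggered by hand through sysfs - essentially what mdcheck wraps; the array name md0 is an assumption:]

echo check > /sys/block/md0/md/sync_action   # start a read/compare pass
cat /proc/mdstat                             # shows "check" progress
cat /sys/block/md0/md/mismatch_cnt           # non-zero afterwards needs a look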