Problem with software raid and Tumbleweed
Hello,

I got Tumbleweed Snapshot20210115 and wanted to install it on a rather old machine with a software RAID1 (UEFI RAID). Windows 10 is already installed on this RAID.

The installer of Tumbleweed Snapshot20210115 does not recognize this software RAID1; it only shows single hard discs and no RAID volume.

An older snapshot, 20190704, is able to see that software RAID1, but I cannot do any zypper dup with that one (that version was running previously, before a hardware crash).

Is there any help for that - either a snapshot which can handle software RAID1, or some way to get zypper dup running again with the older snapshot?

BR and thanks in advance,
Markus
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello,
I got Tumbleweed Snapshot20210115 and wanted to install it on a rather old machine with a software raid1 (UEFI raid).
On this raid some Windows 10 is already existing.
The installer of Tumbleweed Snapshot20210115 does not recognize this software raid1. It only shows single harddiscs and no raid volume.
An older snapshot 20190704 is able to see that software raid1 but I cannot do any zypper dup with that one (this version was running previously before some hardware crash).
Any help for that - either a snapshot which can handle software raid1 or some way to set zypper dup running again with the older snapshot?
BR and thanks in advance Markus
If this is Windows on motherboard RAID, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...

You can use

  # lspci -v

to find the RAID controller. You can use

  # dmraid -r

and

  # dmraid -s

to look at existing arrays. In the past, openSUSE had the dmraid module as a normal part of the setup. I don't have Tumbleweed, but it should be reasonably easy to determine whether the 'dm_mod' and 'dm_multipath' modules are loaded with:

  lsmod | grep dm

Worth checking to see what RAID you have.

--
David C. Rankin, J.D.,P.E.
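A minimal sketch of the same checks from a rescue or live system, in case the "UEFI RAID" turns out to be firmware RAID with IMSM (Intel RST) metadata, which current tools handle through mdadm rather than dmraid -- the device names are placeholders and only the commands themselves are standard:

  # cat /proc/mdstat            # any md arrays already assembled?
  # mdadm --examine --scan      # scan block devices for md/IMSM metadata
  # mdadm --detail-platform     # firmware RAID (e.g. Intel IMSM) capabilities, if any
  # dmraid -r                   # raid sets with vendor (Promise, NVIDIA, ...) metadata
  # lsmod | grep -E 'dm_mod|dm_multipath|raid1'

If mdadm reports IMSM containers, the installer would be expected to assemble the array via mdadm; if only dmraid sees it, the dmraid path applies.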
On 26/01/2021 01.27, David C. Rankin wrote:
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello, ...
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer's CPU, with read support in the BIOS so that it can boot. Once booted, the write support comes from the driver, running on the mainboard CPU, not on the RAID chipset.

A true hardware RAID doesn't use the mainboard CPU and is transparent to the operating system.

--
Cheers / Saludos,
Carlos E. R.
(from openSUSE 15.1 (Legolas))
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello, ...
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer CPU, with read support on BIOS so that it can boot. Once booted it gets write code from the driver, running on the mainboard CPU, not on the raid chipset.
A true hardware raid doesn't use the mainboard CPU, and is transparent to the operating system.
I'm sure Neil Brown and the rest on the linux-raid list would be surprised to learn it is fake... It's software... (and the overhead ceased being measurable when the 486 came out)

Fake is far superior to hardware. Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago. Now you have a hardware-specific RAID install that can no longer benefit from the hardware write-back performance at all... Though, unless you are saturating whatever your setup is -- it really doesn't matter.

--
David C. Rankin, J.D.,P.E.
On Wednesday, 27 January 2021 04:11:31 CET, David C. Rankin wrote:
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello,
...
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer CPU, with read support on BIOS so that it can boot. Once booted it gets write code from the driver, running on the mainboard CPU, not on the raid chipset.
A true hardware raid doesn't use the mainboard CPU, and is transparent to the operating system.
I'm sure Neil Brown and the rest on the linux-raid list would be surprised to learn it is fake...
It's software... (and the overhead ceased being measurable when 486 came out)
Fake is far superior to hardware. Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago. Now you have a hardware specific RAID install that can no longer benefit from the hardware write-back performance at all... Though, unless you are saturating whatever your setup is -- it really doesn't matter.
Not to mention that ZFS cannot be used fruitfully with hardware RAID.

And actually, BIOS RAID is often limited to the Windows world, as the producers do not invest in drivers for the kernel; that way the discs, even if set up as RAID in the controller's BIOS, are seen as single discs (with the subsequent chaos).

That said, from what I have seen, in a world where a controller can just be written off, you will then have recent hardware, where the argument "battery discontinued" is less valid. And if you go for high-quality SAS discs, you will anyway end up with a "hard" RAID controller, or at least a hybrid one.

Performance: I have two mdadm arrays running here on a Phenom board with TW and 32 GB of RAM, and when you do video cutting or similar, you do notice the limitations. But overall, for normal office operations, I agree you do not.

A question about "battery": is it still an argument if your system is backed up by a UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
On 27/01/2021 10.45, Stakanov wrote: ...
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery-backed hardware RAID detects that the power died and commits everything to disk before finally powering down the HDs.

A UPS will keep the entire computer running till it finally gives up, or until either a human or software powers it down before the battery runs down.

Not really the same thing, but both avoid disasters.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On 27/01/2021 10.45, Stakanov wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The battery is there to power the write cache memory until mains power returns. These days it is flash backed cache, not battery backed. -- Per Jessen, Zürich (0.0°C) http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 27/01/2021 12.49, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 10.45, Stakanov wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The battery is there to power the write cache memory until mains power returns. These days it is flash backed cache, not battery backed.
I guess that cache has a significant size? -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On 27/01/2021 12.49, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 10.45, Stakanov wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The battery is there to power the write cache memory until mains power returns. These days it is flash backed cache, not battery backed.
I guess that cache has a significant size?
It depends - in earlier days 64 MB, 128 MB, 256 MB; today 1024 MB (maybe more).

--
Per Jessen, Zürich (0.0°C)
http://www.hostsuisse.com/ - virtual servers, made in Switzerland.
On 27/01/2021 13.04, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 12.49, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 10.45, Stakanov wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The battery is there to power the write cache memory until mains power returns. These days it is flash backed cache, not battery backed.
I guess that cache has a significant size?
It depends - in earlier days 64Mb, 128Mb, 256Mb, today 1024Mb. (maybe more).
Let me see. A "Seagate BarraCuda 3.5" 4TB SATA3" has an internal buffer of 256MB. Then those cards don't have a "significant size" of memory, IMHO.

Which begs the question of who backs up the hard disk's buffer memory. Suppose the computer sends 3 write operations and then loses power. The first operation completes to the "rust". The second is waiting in the internal buffer of the hard disk, the third is still in the card buffer. Just a "suppose" situation. On power restore, operation 2 is lost, but operation 3, out of sequence, is applied. That can be a disaster... so what, disable the internal disk buffer? Can the card buffer be as efficient as the internal disk buffer? I doubt it. The best thing would be for the disk to also have a backup battery or capacitor.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On 27/01/2021 13.04, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 12.49, Per Jessen wrote:
Carlos E. R. wrote:
On 27/01/2021 10.45, Stakanov wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong?
The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The battery is there to power the write cache memory until mains power returns. These days it is flash backed cache, not battery backed.
I guess that cache has a significant size?
It depends - in earlier days 64Mb, 128Mb, 256Mb, today 1024Mb. (maybe more).
Let me see. A "Seagate BarraCuda 3.5" 4TB SATA3" has an internal buffer of 256MB. Then those cards don't have a "significant size" of memory, IMHO.
I'm not sure if that is pertinent :-) The built-in cache on the disk drive is for speeding up reads, not writes.
And begs the question about who backs up the hard disk buffer memory.
It is not used for write caching. -- Per Jessen, Zürich (2.2°C) http://www.hostsuisse.com/ - dedicated server rental in Switzerland.
On Thu, Jan 28, 2021 at 10:40 AM Per Jessen <per@computer.org> wrote:
The built-in cache on the disk drive is for speeding up reads, not writes.
And begs the question about who backs up the hard disk buffer memory.
It is not used for write caching.
It is. Unless the HDD has internal non-volatile backup (some have), it should really be disabled, and this is normally the default when used with RAID controllers - except some vendors lie and retain the write cache. Good for benchmarks ...
On 28/01/2021 10:02, Andrei Borzenkov wrote:
On Thu, Jan 28, 2021 at 10:40 AM Per Jessen <per@computer.org> wrote:
The built-in cache on the disk drive is for speeding up reads, not writes.
And begs the question about who backs up the hard disk buffer memory.
It is not used for write caching.
It is. Unless HDD has internal non-volatile backup (some have) it should really be disabled, and this is normally default when used with RAID controllers - except some vendors lie and retain write cache. Good for benchmarks ...
Yeah, you're right, I know there is a setting to enable it. We check every new drive when they are installed, just in case - we have not seen any IBM/Hitachi/Seagate/WDC drive with write cache enabled in at least 10 years. I don't know if we ever have, but we don't keep the test records for longer than that. -- Per Jessen, Herrliberg (2.6°C)
On 28/01/2021 10.13, Per Jessen wrote:
On 28/01/2021 10:02, Andrei Borzenkov wrote:
On Thu, Jan 28, 2021 at 10:40 AM Per Jessen <per@computer.org> wrote:
The built-in cache on the disk drive is for speeding up reads, not writes.
And begs the question about who backs up the hard disk buffer memory.
It is not used for write caching.
Are you sure? I understood that the on disk cache, by default, is read/write.
It is. Unless HDD has internal non-volatile backup (some have) it should really be disabled, and this is normally default when used with RAID controllers - except some vendors lie and retain write cache. Good for benchmarks ...
Ah.
Yeah, you're right, I know there is a setting to enable it. We check every new drive when they are installed, just in case - we have not seen any IBM/Hitachi/Seagate/WDC drive with write cache enabled in at least 10 years. I don't know if we ever have, but we don't keep the test records for longer than that.
Ok, you mean that hard disks that are specifically made for RAID usage have the write cache disabled? Or that you manually disable it?

I don't know, I never purchased disks made for RAID. And then, the same disks for normal usage have the write cache enabled?

If I do "hdparm -I" on a normal disk of this computer, I get:

Configuration:
...
        cache/buffer size  = unknown
...
Commands/features:
        Enabled Supported:
           *    SMART feature set
                Security Mode feature set
           *    Power Management feature set
           *    Write cache            <===

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
Yeah, you're right, I know there is a setting to enable it. We check every new drive when they are installed, just in case - we have not seen any IBM/Hitachi/Seagate/WDC drive with write cache enabled in at least 10 years. I don't know if we ever have, but we don't keep the test records for longer than that.
Ok, you mean that hard disks that are specifically made for raid usage, have the write cache disabled? Or that you manually disable it?
No, hard disks generally have write cache disabled because it is a potential problem unless you have a UPS. Like I wrote, we check the setting whenever we install or replace a drive. AFAIR, we had to make a small utility; hdparm did not support it. Maybe it does now, I don't know.
I don't know, I never purchased disks made for raid.
They are not specifically made for raid - WDC used to have a series called "RAID edition", but I think they've stopped that. I don't know if the default settings vary depending on whether they are industry or consumer drives, it's possible. I also see write cache enabled on my laptop, for instance. -- Per Jessen, Zürich (11.4°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 28.01.2021 17:21, Per Jessen wrote:
No, harddisks generally have write cache disabled
Consumer hard disks generally have write cache enabled because it is good for benchmarks (and performance as perceived by users).

[    1.894064] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
They are not specifically made for raid - WDC used to have a series called "RAID edition", but I think they've stopped that.
They usually have different default settings (the well-known recovery timeout, for example) and may have better quality control. HDD write cache is controlled by RAID controllers (which can explicitly enable or disable it, or just leave it as is).
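The "recovery timeout" mentioned here is presumably the SCT Error Recovery Control setting, which can be inspected and, on drives that support it, changed with smartctl -- a sketch only, with sda as a placeholder:

  smartctl -l scterc /dev/sda          # show the current read/write recovery timeouts
  smartctl -l scterc,70,70 /dev/sda    # limit both to 7 seconds (values are in tenths of a second)

Desktop drives often either refuse the setting or reset it on power cycle, which is one of the practical differences from the "RAID edition" models.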
On 28.01.2021 18:20, Andrei Borzenkov wrote:
On 28.01.2021 17:21, Per Jessen wrote:
No, harddisks generally have write cache disabled
Consumer hard disks generally have write cache enabled because it is good for benchmarks (and performance as perceived by users).
[ 1.894064] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
And if the HDD actually supports FUA, the write cache does not matter much - you may lose some data (but you can lose some data for a lot of different reasons), yet the file system remains consistent, because FUA ensures data is flushed to stable storage when needed.
Andrei Borzenkov wrote:
On 28.01.2021 17:21, Per Jessen wrote:
No, harddisks generally have write cache disabled
Consumer hard disks generally have write cache enabled because it is good for benchmarks (and performance as perceived by users).
Yeah, we did a quick survey here in the office (well, in everyone's home office), and every laptop or desktop has write cache enabled.
[ 1.894064] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
They are not specifically made for raid - WDC used to have a series called "RAID edition", but I think they've stopped that.
They usually have different default settings (well known recovery timeout as example) and may have better quality control.
We used to buy Hitachi Ultrastar, then WDC RE for a while, but we have long given up and just make sure we buy 24/7 drives. -- Per Jessen, Zürich (10.8°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 2021-01-28 at 18:20 +0300, Andrei Borzenkov wrote:
On 28.01.2021 17:21, Per Jessen wrote:
No, harddisks generally have write cache disabled
Consumer hard disks generally have write cache enabled because it is good for benchmarks (and performance as perceived by users).
[ 1.894064] sd 1:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Telcontar:~ # journalctl | grep "Write cache"
Jan 15 12:24:21 Telcontar kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 15 12:24:21 Telcontar kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 15 12:24:21 Telcontar kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 15 12:24:21 Telcontar kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 15 23:20:32 Telcontar kernel: sd 10:0:0:0: [sde] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Jan 20 02:57:15 Telcontar kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 20 02:57:15 Telcontar kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 20 02:57:15 Telcontar kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 20 02:57:15 Telcontar kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 04:38:56 Telcontar kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 04:38:56 Telcontar kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 04:38:56 Telcontar kernel: sd 2:0:0:0: [sdc] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Jan 24 04:38:56 Telcontar kernel: sd 3:0:0:0: [sdd] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA

No report on the nvme disk. Curious.

sde had it disabled on Jan 15, then no report. Ah, must have been an external stick, I forgot.

What are DPO or FUA? Tried googling the first, found "days past ovulation" as first hit. Then looked on wikipedia and saw nothing for computers. Similar bad luck for FUA.
They are not specifically made for raid - WDC used to have a series called "RAID edition", but I think they've stopped that.
They usually have different default settings (well known recovery timeout as example) and may have better quality control. HDD write cache is controlled by RAID controllers (which can explicitly enable or disable it or just leave it as is).
Software raid in Linux doesn't seem to do it. I have one raid 5 partition for playing.

--
Cheers,
Carlos E. R.
(from openSUSE 15.2 x86_64 at Telcontar)
On 28.01.2021 22:00, Carlos E. R. wrote:
What are DPO or FUA?
DPO - Disable Page Out. Flag on SCSI read/write command that indicates device should not store data in its internal cache.

FUA - Force Unit Access. Flag on SCSI read/write command that indicates that command must complete using persistent non-volatile storage.
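A quick way to see both settings on a running system -- a sketch only; sda is a placeholder, and the sysfs attributes assume a reasonably recent kernel:

  cat /sys/block/sda/queue/write_cache   # "write back" or "write through"
  cat /sys/block/sda/queue/fua           # 1 = kernel uses FUA for this device, 0 = it doesn't
  journalctl -k | grep -i 'write cache'  # the same line the boot log prints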
On 29/01/2021 06.08, Andrei Borzenkov wrote:
On 28.01.2021 22:00, Carlos E. R. wrote:
What are DPO or FUA?
DPO - Disable Page Out. Flag on SCSI read/write command that indicates device should not store data in its internal cache.
FUA - Force Unit Access. Flag on SCSI read/write command that indicates that command must complete using persistent non-volatile storage.
Thanks :-) -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 28/01/2021 15.21, Per Jessen wrote:
Carlos E. R. wrote:
Yeah, you're right, I know there is a setting to enable it. We check every new drive when they are installed, just in case - we have not seen any IBM/Hitachi/Seagate/WDC drive with write cache enabled in at least 10 years. I don't know if we ever have, but we don't keep the test records for longer than that.
Ok, you mean that hard disks that are specifically made for raid usage, have the write cache disabled? Or that you manually disable it?
No, harddisks generally have write cache disabled because it is a potential problem unless you have a UPS. Like I wrote, we check the setting whenever we install or replace a drive. AFAIR, we had to make a small utility, hdparm did not support it. Maybe it does now, I don't know.
Ah. I was trying to find how to enable or disable it in hdparm and couldn't see it.
I don't know, I never purchased disks made for raid.
They are not specifically made for raid - WDC used to have a series called "RAID edition", but I think they've stopped that.
I don't know if the default settings vary depending on whether they are industry or consumer drives, it's possible. I also see write cache enabled on my laptop, for instance.
-- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
Carlos E. R. wrote:
On 28/01/2021 15.21, Per Jessen wrote:
Carlos E. R. wrote:
Yeah, you're right, I know there is a setting to enable it. We check every new drive when they are installed, just in case - we have not seen any IBM/Hitachi/Seagate/WDC drive with write cache enabled in at least 10 years. I don't know if we ever have, but we don't keep the test records for longer than that.
Ok, you mean that hard disks that are specifically made for raid usage, have the write cache disabled? Or that you manually disable it?
No, harddisks generally have write cache disabled because it is a potential problem unless you have a UPS. Like I wrote, we check the setting whenever we install or replace a drive. AFAIR, we had to make a small utility, hdparm did not support it. Maybe it does now, I don't know.
Ah. I was trying to find how to enable or disable it in hdparm and couldn't see it.
It looks like it is the '-W' option now. -- Per Jessen, Zürich (11.8°C) http://www.dns24.ch/ - your free DNS host, made in Switzerland.
On 28/01/2021 20.00, Per Jessen wrote:
Carlos E. R. wrote:
On 28/01/2021 15.21, Per Jessen wrote:
Carlos E. R. wrote:
No, harddisks generally have write cache disabled because it is a potential problem unless you have a UPS. Like I wrote, we check the setting whenever we install or replace a drive. AFAIR, we had to make a small utility, hdparm did not support it. Maybe it does now, I don't know.
Ah. I was trying to find how to enable or disable it in hdparm and couldn't see it.
It looks like it is the '-W' option now.
   -W     Get/set the IDE/SATA drive's write-caching feature.

Missed it. I grepped for "cache" and "buffer".

Careful, lower case -w does something very dangerous.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
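For reference, a short usage sketch (sdX is a placeholder; the change is immediate, but whether it survives a power cycle depends on the drive):

  hdparm -W /dev/sdX      # query the current write-cache setting
  hdparm -W 0 /dev/sdX    # disable the volatile write cache
  hdparm -W 1 /dev/sdX    # enable it again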
On 1/27/21 3:35 AM, Carlos E. R. wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong? The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The hardware RAID controllers that we use (3-ware/LSI/Broadcom) have super-capacitors instead of lithium batteries. The battery is used to power cache RAM that holds unwritten data in the event of a system crash. I don't think there's time for the controller to complete writes to disk, it doesn't power the disks after all.
An UPS will keep the entire computer running till it finally gives up, or either the human or software powers it down before the battery runs down.
Not really the same thing, but both avoid disasters.
My experience with UPSes may be limited, but back in the day I ran a 10-kVA UPS to power a group of Sun Microsystems servers. It was a nice UPS with a ferroresonant transformer that conditioned power as well as running the 10-kVA load for 30 minutes. In my experience, we had more unplanned power outages caused by the UPS itself than if we had connected directly to the mains.

After we retired the Suns and switched to SuSE, I once managed to keep the main server up and running continuously for a bit more than 4 years, without the UPS. Without a reboot! It was a busy server too, with a hardware RAID controller.

Regards,
Lew
On 27/01/2021 16.39, Lew Wolfgang wrote:
On 1/27/21 3:35 AM, Carlos E. R. wrote:
A question about "battery": is it still an argument if you system is backed up by an UPS? I thought the batteries on controller cards were paramount only on systems that are not protected by a convenient UPS. Wrong? The idea is, I assume, that battery backed hardware raid detects that the power died and commits everything to disk before finally powering down the HDs.
The hardware RAID controllers that we use (3-ware/LSI/Broadcom) have super-capacitors instead of lithium batteries. The battery is used to power cache RAM that holds unwritten data in the event of a system crash. I don't think there's time for the controller to complete writes to disk, it doesn't power the disks after all.
Ah.
An UPS will keep the entire computer running till it finally gives up, or either the human or software powers it down before the battery runs down.
Not really the same thing, but both avoid disasters.
My experience with UPS may be limited, but back in the day I ran a 10-KVA UPS to power a group of Sun Microsystem servers. It was a nice UPS with a ferroresonant transformer that conditioned power as well as running the 10-KVA load for 30-minutes. In my experience, we experienced more unplanned power outages caused by the UPS itself than if we directly connected to the mains. After we retired the Suns and switched to SuSE I once managed to keep the main server up and running continuously for a bit more than 4-years, without the UPS. Without a reboot! It was a busy server too, with a hardware RAID controller.
Do you remember how the UPS failed? I'm curious. I have a wild guess.

I once had to install two sizable UPSes in cascade to power some "dumb terminals and printers". The instructions from upstream management were to put both in active mode, i.e. directly and constantly power the terminals from the batteries, which were independently charged. This mode would age both boxes prematurely and make them fail eventually. So instead I connected (at least one, maybe both, I don't remember) in passive mode: i.e. the terminals were supplied from the mains, failing over quickly to battery on power failure. I told my boss later, when I saw him; he agreed.

The only advantage I see with their method was that the AC supplied to the terminals would be under our control. But a good UPS will switch over if the mains is bad, either low or high voltage or a frequency change, and do it so fast that a computer doesn't crash.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
On 1/27/21 1:42 PM, Carlos E. R. wrote:
My experience with UPS may be limited, but back in the day I ran a 10-KVA UPS to power a group of Sun Microsystem servers. It was a nice UPS with a ferroresonant transformer that conditioned power as well as running the 10-KVA load for 30-minutes. In my experience, we experienced more unplanned power outages caused by the UPS itself than if we directly connected to the mains. After we retired the Suns and switched to SuSE I once managed to keep the main server up and running continuously for a bit more than 4-years, without the UPS. Without a reboot! It was a busy server too, with a hardware RAID controller. Do you remember how did the UPS fail? I'm curious.
The UPS never actually failed, except for a time or two when the battery bank needed to be replaced. The unscheduled outages were more due to me fiddling around with it and messing it up. It was a Best Ferrups 10-kva model. Looks like they're still made: https://www.eaton.com/us/en-us/catalog/backup-power-ups-surge-it-power-distr... The battery bank was in a separate container as big as the main unit. They were free-standing, not rack mounted. It would power the load through the saturated transformer and when power dropped it would use the battery dc (switched I guess) to serve as the AC input to the primary. The load would never see a glitch, until the battery bank discharged. The saturated core of the transformer served to regulate and smooth power to the load. Regards, Lew
On 27/01/2021 23.27, Lew Wolfgang wrote:
On 1/27/21 1:42 PM, Carlos E. R. wrote:
My experience with UPS may be limited, but back in the day I ran a 10-KVA UPS to power a group of Sun Microsystem servers. It was a nice UPS with a ferroresonant transformer that conditioned power as well as running the 10-KVA load for 30-minutes. In my experience, we experienced more unplanned power outages caused by the UPS itself than if we directly connected to the mains. After we retired the Suns and switched to SuSE I once managed to keep the main server up and running continuously for a bit more than 4-years, without the UPS. Without a reboot! It was a busy server too, with a hardware RAID controller. Do you remember how did the UPS fail? I'm curious.
The UPS never actually failed, except for a time or two when the battery bank needed to be replaced. The unscheduled outages were more due to me fiddling around with it and messing it up.
:-D When I was installing air conditioning on my "computer room", the technician connected his power drill to the UPS output socket, then looked bewildered when his drill would start for a second then die as soon as he touched the brick wall to work :-D Of course, my computer crashed instantly. He did not think of asking "where should I connect my tools?"
It was a Best Ferrups 10-kva model. Looks like they're still made:
https://www.eaton.com/us/en-us/catalog/backup-power-ups-surge-it-power-distr...
The battery bank was in a separate container as big as the main unit. They were free-standing, not rack mounted. It would power the load through the saturated transformer and when power dropped it would use the battery dc (switched I guess) to serve as the AC input to the primary. The load would never see a glitch, until the battery bank discharged. The saturated core of the transformer served to regulate and smooth power to the load.
Nice. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
Lew Wolfgang wrote:
On 1/27/21 1:42 PM, Carlos E. R. wrote:
My experience with UPS may be limited, but back in the day I ran a 10-KVA UPS to power a group of Sun Microsystem servers. It was a nice UPS with a ferroresonant transformer that conditioned power as well as running the 10-KVA load for 30-minutes. In my experience, we experienced more unplanned power outages caused by the UPS itself than if we directly connected to the mains. After we retired the Suns and switched to SuSE I once managed to keep the main server up and running continuously for a bit more than 4-years, without the UPS. Without a reboot! It was a busy server too, with a hardware RAID controller. Do you remember how did the UPS fail? I'm curious.
The UPS never actually failed, except for a time or two when the battery bank needed to be replaced.
Downstairs we have a couple of APC SmartUPSes - they know very well when the batteries are due :-) They will sound an alarm, send an email and an SNMP alert, and keep doing it until the battery "cartridge" is replaced. Once you pull out a cartridge, the entire bank is disabled, which reduces the runtime by 20%.

At home, I have a little 6kVA Eaton; it's a little more effort. To swap the batteries, you have to take the front off and switch it into bypass, for instance. With monitoring by NUT, I still get SNMP alerts though - not sure if it actually will complain about batteries, but it does complain if the runtime sinks below a certain threshold.

Still, it is really off-topic.

--
Per Jessen, Zürich (3.0°C)
http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On Thursday, 2021-01-28 at 10:42 +0100, Per Jessen wrote:
Lew Wolfgang wrote:
On 1/27/21 1:42 PM, Carlos E. R. wrote:
Do you remember how did the UPS fail? I'm curious.
The UPS never actually failed, except for a time or two when the battery bank needed to be replaced.
Downstairs we have a couple of APC SmartUPS'es - they know very well when the batteries are due :-) They will sound an alarm, send an email and an SNMP alert and keep doing it until the battery "cartridge" is replaced. Once you pull out a cartridge, the entire bank is disabled, which reduces the runtime by 20%.
At home, I have a little 6kVA Eaton, it's a little more effort. To swap the batteries, you have to take the front off and switch it into bypass, for instance. With monitoring by NUT, I still get SNMP alerts though - not sure if it actually will complain about batteries, but it does complain if the runtime sinks below a certain threshhold.
Still, it is really off-topic.
We can talk about how to do the monitoring in openSUSE :-)

Sometimes I see an applet on the desktop that says that the UPS battery is fully charged. Currently, that "server" (yes, it runs XFCE because it does media server duties and I watch videos on it) reports on the battery of the Logitech keyboard, which is a nice new feature. It says it is at 50%. But it does not report on the UPS, which is connected via USB:

<0.6> 2021-01-28T14:00:55.544877+01:00 Isengard kernel - - - [401782.992870] usb 1-2.2: USB disconnect, device number 6
<0.6> 2021-01-28T14:00:58.380821+01:00 Isengard kernel - - - [401785.828546] usb 1-2.2: new low-speed USB device number 7 using xhci_hcd
<0.6> 2021-01-28T14:00:58.536862+01:00 Isengard kernel - - - [401785.987083] usb 1-2.2: New USB device found, idVendor=0665, idProduct=5161, bcdDevice= 0.02
<0.6> 2021-01-28T14:00:58.536906+01:00 Isengard kernel - - - [401785.987101] usb 1-2.2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
<0.6> 2021-01-28T14:00:58.536910+01:00 Isengard kernel - - - [401785.987113] usb 1-2.2: Product: USB to Serial
<0.6> 2021-01-28T14:00:58.536913+01:00 Isengard kernel - - - [401785.987122] usb 1-2.2: Manufacturer: INNO TECH
<0.6> 2021-01-28T14:00:58.536915+01:00 Isengard kernel - - - [401785.987132] usb 1-2.2: SerialNumber: 20100826
<0.6> 2021-01-28T14:00:58.548845+01:00 Isengard kernel - - - [401785.997855] hid-generic 0003:0665:5161.0007: hiddev97,hidraw2: USB HID v1.00 Device [INNO TECH USB to Serial] on usb-0000:00:14.0-2.2/input0
<1.6> 2021-01-28T14:00:58.684152+01:00 Isengard mtp-probe - - - checking bus 1, device 7: "/sys/devices/pci0000:00/0000:00:14.0/usb1/1-2/1-2.2"
<1.6> 2021-01-28T14:00:58.685372+01:00 Isengard mtp-probe - - - bus: 1, device: 7 was not an MTP device

The log reports on the chip that converts from the internal RS-232 port to USB, not really on the UPS.

I think that on Leap 15.1 an applet reported on the UPS, but not on 15.2. I have on the ToDo list to install "nut" or something to at least monitor and log the UPS status.

What is currently the Linux software that handles small UPS devices best, via USB cable?

--
Cheers,
Carlos E. R.
(from openSUSE 15.2 x86_64 at Telcontar)
On 2021-01-28 8:09 a.m., Carlos E. R. wrote:
We can talk about how to do the monitoring in openSUSE :-)
I have a couple of APC UPSs, one connected to my Linux system and the other to my pfSense firewall, which is built on FreeBSD. Apcupsd and nut both work on Linux, but neither works on FreeBSD, though they did with an older version of the same APC model.
On Thu, 28 Jan 2021, James Knott wrote:
I have a couple of APC UPSs, one connected to my Linux system and the other, my pfsense firewall, which is built on FreeBSD. Apcupsd and nut both work on Linux, but neither works on FreeBSD,
See https://dan.langille.org/2020/09/07/monitoring-your-ups-using-nut-on-freebsd...

FreeBSD 12.1
nut 2.7.4
Eaton 5PX: 5PX2200RT – 2U Line Interactive UPS
Eaton EBM: 5PXEBM48RT – external battery pack
pfSense 2.4.5-RELEASE-p1 (amd64)

NUT also works well with openSUSE.

Roger
On 2021-01-28 10:13 a.m., Roger Price wrote:
I have a couple of APC UPSs, one connected to my Linux system and the other, my pfsense firewall, which is built on FreeBSD. Apcupsd and nut both work on Linux, but neither works on FreeBSD,
See https://dan.langille.org/2020/09/07/monitoring-your-ups-using-nut-on-freebsd...
FreeBSD 12.1 nut 2.7.4 Eaton 5PX: 5PX2200RT – 2U Line Interactive UPS Eaton EBM: 5PXEBM48RT – external battery pack pfSense 2.4.5-RELEASE-p1 (amd64)
NUT also works well with openSUSE. Roger
I don't have a problem on openSUSE, as both apcupsd and nut work. Both worked on FreeBSD with the older version of the APC UPS, but not with the new one. When I was looking into this, I found there was some difference in the protocol used, which FreeBSD hadn't caught up with, but Linux had. I'm not going to toss out a perfectly good UPS just because FreeBSD isn't up to date.

Also, I'll be setting up a new system shortly, as the computer I had been running pfSense on died recently. As a replacement, I'm getting a new mini PC to replace the old HP desktop I had been using. There is also a new major version of pfSense coming out soon. Perhaps it will better support the APC UPS.

Here's what I've ordered for my new firewall. Mine has an i5 CPU, 4 GB of RAM and a 20 GB SSD. My old firewall had an AMD Athlon 3200+ & 4 GB.

https://www.aliexpress.com/item/32799580496.html
Carlos E. R. wrote:
On Thursday, 2021-01-28 at 10:42 +0100, Per Jessen wrote:
Still, it is really off-topic.
We can talk about how to do the monitoring in openSUSE :-)
NUT is the answer.
But it does not report on the UPS, which is connected via USB:
Maybe that app needs something extra, some config in order to talk to that UPS.
What is currently the Linux software that handle best small UPS devices, via USB cable?
NUT :-) -- Per Jessen, Zürich (11.4°C) http://www.dns24.ch/ - free dynamic DNS, made in Switzerland.
On 28/01/2021 15.01, Per Jessen wrote:
Carlos E. R. wrote:
On Thursday, 2021-01-28 at 10:42 +0100, Per Jessen wrote:
Still, it is really off-topic.
We can talk about how to do the monitoring in openSUSE :-)
NUT is the answer.
But it does not report on the UPS, which is connected via USB:
Maybe that app needs something extra, some config in order to talk to that UPS.
What is currently the Linux software that handle best small UPS devices, via USB cable?
NUT :-)
Okey, I will try it, and try to not get nuts on the way :-) -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
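A minimal NUT sketch for a small USB UPS on openSUSE, to save the next person some digging -- "myups", the password and the driver choice are assumptions; cheap units with that 0665:5161 "INNO TECH USB to Serial" chip usually need blazer_usb or nutdrv_qx rather than usbhid-ups, and service names can differ slightly between versions:

  # zypper install nut

  # /etc/ups/nut.conf
  MODE=standalone

  # /etc/ups/ups.conf
  [myups]
      driver = usbhid-ups      # or blazer_usb / nutdrv_qx for Megatec-protocol units
      port = auto
      desc = "small USB UPS"

  # /etc/ups/upsd.users
  [monuser]
      password = secret
      upsmon master

  # /etc/ups/upsmon.conf
  MONITOR myups@localhost 1 monuser secret master

  # systemctl enable --now nut-driver nut-server nut-monitor
  # upsc myups@localhost       # battery.charge, battery.runtime, ups.status, ...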
On 1/27/21 1:42 PM, Carlos E. R. wrote: The UPS never actually failed, except for a time or two when the battery bank needed to be replaced.
I've had some of my own UPSes fail after several years. But one time, a few years ago, when I was working for a telecom company, I got a call that their fibre connection had failed. When I got there, the entire 3-cabinet lineup was down, due to a failed UPS.

BTW, a "UPS" we had at work many years ago was a motor turning an alternator and an 8-ton flywheel. When the power failed, a clutch would connect a huge diesel engine to the flywheel to start it and take over the load. When that diesel was running, you could feel it throughout the building. That system also produced power at a slightly lower frequency than the mains, due to the induction motor used to run it. IIRC, there were 4 of those systems. Later on, turbines were used.
On 27/01/2021 04.11, David C. Rankin wrote:
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello, ...
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer CPU, with read support on BIOS so that it can boot. Once booted it gets write code from the driver, running on the mainboard CPU, not on the raid chipset.
A true hardware raid doesn't use the mainboard CPU, and is transparent to the operating system.
I'm sure Neil Brown and the rest on the linux-raid list would be surprised to learn it is fake...
It's software... (and the overhead ceased being measurable when 486 came out)
Fake is far superior to hardware.
Of course it is :-)
Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago. Now you have a hardware specific RAID install that can no longer benefit from the hardware write-back performance at all... Though, unless you are saturating whatever your setup is -- it really doesn't matter.
You don't need to sell me on software RAID - but I prefer true software RAID. The disadvantage is you cannot dual-boot to Windows.

--
Cheers / Saludos,
Carlos E. R.
(from 15.2 x86_64 at Telcontar)
On Wednesday, 27 January 2021 12:43:54 CET, Carlos E. R. wrote:
On 27/01/2021 04.11, David C. Rankin wrote:
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
On 1/25/21 4:07 PM, Markus Egg wrote:
Hello,
...
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer CPU, with read support on BIOS so that it can boot. Once booted it gets write code from the driver, running on the mainboard CPU, not on the raid chipset.
A true hardware raid doesn't use the mainboard CPU, and is transparent to the operating system.
I'm sure Neil Brown and the rest on the linux-raid list would be surprised to learn it is fake...
It's software... (and the overhead ceased being measurable when 486 came out)
Fake is far superior to hardware.
Of course it is :-)
Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago. Now you have a hardware specific RAID install that can no longer benefit from the hardware write-back performance at all... Though, unless you are saturating whatever your setup is -- it really doesn't matter.
You don't need selling me software raid - but I prefer true software raid. The disadvantage is you can not double boot to Windows.

But can I set up a UEFI system with an mdadm-driven root device as RAID1? Not that dramatic with TW as a standalone machine, but nevertheless I wondered.
And, as a supplement: provided that I am using only SSD for root. (Talking here about a standard installation with BTRFS, not encrypted and not LVM.) So the issues, if there are any, you are going to have with mirroring the UEFI part.
Stakanov composed on 2021-01-27 13:01 (UTC+0100):
27 January 2021 12:43:54 CET, Carlos E. R. composed:
...I prefer true software raid. The disadvantage is you can not double boot to Windows.
Oh? e.g.:

sda1 ESP
sda2 Windows reserved
sda3 Windows system
sda4 Linux RAID
sdb1 Windows data
sdb2 Linux RAID
But can I set up an UEFI system with a mdadm driven root device as RAID1? Not that dramatic with TW as standalone machine nevertheless I wondered.
And, supplement, "provided that I am using only SSD for root. (Talking here about a standard installation with BTRFS, non encrypted and not LVM.
I have one PC using UEFI plus RAID. OSes are on the M.2 NVME. Linux RAID data is on a pair of rotating rusts. This PC is similar, OSes are on ordinary MBR SSD, data on MBR RR pair. -- Evolution as taught in public schools, like religion, is based on faith, not on science. Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
On 27/01/2021 20.18, Felix Miata wrote:
Stakanov composed on 2021-01-27 13:01 (UTC+0100):
27 January 2021 12:43:54 CET, Carlos E. R. composed:
...I prefer true software raid. The disadvantage is you can not double boot to Windows.
Oh? e.g.:
sda1 ESP sda2 Windows reserved sda3 Windows system sda4 Linux RAID sdb1 Windows data sdb2 Linux RAID
Windows is not using raid in that setup... Only Linux is "raided". -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
Carlos E. R. composed on 2021-01-27 22:51 (UTC+0100):
Felix Miata wrote: ...
...I prefer true software raid. The disadvantage is you can not double boot to Windows.
Oh? e.g.:
sda1 ESP sda2 Windows reserved sda3 Windows system sda4 Linux RAID sdb1 Windows data sdb2 Linux RAID
Windows is not using raid in that setup... Only Linux is "raided".
Sure, but it's still multibooting, and the most important filesystems are on RAID. :D -- Evolution as taught in public schools, like religion, is based on faith, not on science. Team OS/2 ** Reg. Linux User #211409 ** a11y rocks! Felix Miata *** http://fm.no-ip.com/
On 28/01/2021 02.28, Felix Miata wrote:
Carlos E. R. composed on 2021-01-27 22:51 (UTC+0100):
Felix Miata wrote: ...
...I prefer true software raid. The disadvantage is you can not double boot to Windows.
Oh? e.g.:
sda1 ESP sda2 Windows reserved sda3 Windows system sda4 Linux RAID sdb1 Windows data sdb2 Linux RAID
Windows is not using raid in that setup... Only Linux is "raided".
Sure, but it's still multibooting, and the most important filesystems are on RAID.
:D
Doesn't count. You must have both on raid to fulfill the conditions. This is cheating on the test. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
On 27/01/2021 at 20:18, Felix Miata wrote:
Stakanov composed on 2021-01-27 13:01 (UTC+0100):
27 January 2021 12:43:54 CET, Carlos E. R. composed:
...I prefer true software raid. The disadvantage is you can not double boot to Windows.
Oh? e.g.:
sda1 ESP sda2 Windows reserved sda3 Windows system sda4 Linux RAID sdb1 Windows data sdb2 Linux RAID
But can I set up an UEFI system with a mdadm driven root device as RAID1? Not that dramatic with TW as standalone machine nevertheless I wondered.
And, supplement, "provided that I am using only SSD for root. (Talking here about a standard installation with BTRFS, non encrypted and not LVM.
I have one PC using UEFI plus RAID. OSes are on the M.2 NVME. Linux RAID data is on a pair of rotating rusts. This PC is similar, OSes are on ordinary MBR SSD, data on MBR RR pair.
My system looks somewhat similar, just without NVMe - an SSD instead - and ext4 instead of BTRFS.

Thanks for all the answers regarding RAID and fake RAID.

But actually there was no answer to my question about a proper Tumbleweed snapshot. ;-) I tried e.g. Snapshot20210121 at that time as well - no RAID.

At that point I decided to go back to SuSE Leap 15.2, which can handle this RAID. Maybe future Tumbleweed snapshots will be able to handle such a RAID again?

BR
On 15/09/2021 21.12, Markus Egg wrote: ...
At that point I decided to get back to SuSE Leap 15.2 which can handle this raid. Maybe future tumbleweed snapshots can handle such a raid again?
For that, you have to file a Bugzilla report against the current TW release. Otherwise, chances tend to nil.

--
Cheers / Saludos,
Carlos E. R.
(from oS Leap 15.2 x86_64 (Minas Tirith))
Markus Egg composed on 2021-09-15 15:12 (UTC-0400):
Felix Miata composed on 2021-01-27 14:18 (UTC-0500):
I have one PC using UEFI plus RAID. OSes are on the M.2 NVME. Linux RAID data is on a pair of rotating rusts. This PC is similar, OSes are on ordinary MBR SSD, data on MBR RR pair.
My system somehow looks similar, just without NVME and SSD instead, ext4 instead of BTRFS.
Thanks for all the answers regarding raid and fake raid.
But actually there was no answer to my question about a proper Tumbleweed Snapshot. ;-) I tried e.g. Snapshot20210121 at that time also- no raid.
At that point I decided to get back to SuSE Leap 15.2 which can handle this raid. Maybe future tumbleweed snapshots can handle such a raid again?
What's the problem now? TW works fine with software RAID here:

# inxi -Sy
System:    Host: gb250 Kernel: 5.13.12-1-default x86_64 bits: 64
           Desktop: KDE 3.5.10 Distro: openSUSE Tumbleweed 20210902
# alias | grep Mnt
alias Mnt='mount | egrep -v "cgroup|rpc|tmpfs|^sys|on /dev|on /proc|on /sys|on /var" | sort '
# Mnt
/dev/md3 on / type ext4 (rw,noatime)
/dev/md5 on /srv type ext4 (rw,noatime)
/dev/md6 on /usr/local type ext4 (rw,noatime)
/dev/md7 on /home type ext4 (rw,noatime)
/dev/nvme0n1p3 on /boot type ext2 (rw,noatime,noacl)
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/md3         18G  9.3G  7.1G  57% /
/dev/nvme0n1p3  388M  167M  221M  44% /boot
/dev/md6        1.9G  1.8G  155M  92% /usr/local
/dev/md5        3.8G  3.0G  767M  80% /srv
/dev/md7        145G   42G  102G  30% /home
# parted -l | egrep 'Table|Model'
Model: ATA ST1000DM003-1CH1 (scsi)
Partition Table: msdos
Model: ATA ST1000DM003-1CH1 (scsi)
Partition Table: msdos
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: MKNSSDPL120GB-D8 (nvme)
Partition Table: gpt
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: msdos
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
Model: Linux Software RAID Array (md)
Partition Table: loop
--
Evolution as taught in public schools is, like religion,
based on faith, not based on science.

Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata
On 27/01/2021 13.01, Stakanov wrote:
On Wednesday, 27 January 2021 12:43:54 CET, Carlos E. R. wrote:
On 27/01/2021 04.11, David C. Rankin wrote:
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
...
Fake is far superior to hardware.
Of course it is :-)
Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago. Now you have a hardware specific RAID install that can no longer benefit from the hardware write-back performance at all... Though, unless you are saturating whatever your setup is -- it really doesn't matter.
You don't need selling me software raid - but I prefer true software raid. The disadvantage is you can not double boot to Windows. But can I set up an UEFI system with a mdadm driven root device as RAID1? Not that dramatic with TW as standalone machine nevertheless I wondered.
Theoretically, with Windows you should be able to boot just fine: it sees only "one" disk. The firmware should provide read access to the disk, including of course reading the duplicated EFI partition. After booting, the Windows driver would add write support. Linux may do the same if that particular firmware is supported. But don't ask me, I never dared to try.
And, supplement, "provided that I am using only SSD for root. (Talking here about a standard installation with BTRFS, non encrypted and not LVM.
So the issues, if there are, you are going to have with mirroring the UEFI part.
I don't see why - talking of fake raid. -- Cheers / Saludos, Carlos E. R. (from 15.2 x86_64 at Telcontar)
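For the pure mdadm case (no fake RAID at all), the approach usually described is to mirror the ESP with the md superblock at the end of the partition, so the firmware still sees a plain FAT filesystem -- a sketch with example device names only, not something tested in this thread:

  # RAID1 for the EFI system partition, metadata 1.0 (superblock at the end)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=1.0 /dev/sda1 /dev/sdb1
  mkfs.vfat -F 32 /dev/md0        # mounted later as /boot/efi

  # ordinary RAID1 (default metadata 1.2) for the root filesystem
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

The catch is that the firmware (and Windows, in a dual-boot setup) writes to the ESP without knowing about the mirror, so the two halves can drift out of sync -- which is why installers are cautious about setting this up automatically.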
On 2021/01/27 03:43, Carlos E. R. wrote:
On 27/01/2021 04.11, David C. Rankin wrote:
On 1/26/21 6:37 AM, Carlos E. R. wrote:
On 26/01/2021 01.27, David C. Rankin wrote:
If this is windows on motherboard raid, it's probably dmraid (otherwise known as Fake RAID or BIOS RAID) -- it's not really Fake, it's just the moniker dmraid ended up with from the hardware RAID snobs...
It is really fake, because it doesn't run in hardware: it runs in software, on the computer CPU, with read support on BIOS so that it can boot. Once booted it gets write code from the driver, running on the mainboard CPU, not on the raid chipset.
A true hardware raid doesn't use the mainboard CPU, and is transparent to the operating system.
I'm sure Neil Brown and the rest on the linux-raid list would be surprised to learn it is fake...
It's software... (and the overhead ceased being measurable when 486 came out) Fake is far superior to hardware.
Of course it is :-)
Just have a battery die on your hardware card and drop from write-back to write-through... and then find out your battery was discontinued 3 years ago.
Nothing like people overgeneralizing.

1) Fake RAID -- Since it's not called Fake RAID by the OEMs, it's hard to really say what you are talking about, but Dell ships a BIOS/firmware-operated RAID, though it doesn't operate in all the modes of their HW solutions. RAID0 and RAID1 are fairly trivial to do, though I don't know about combo RAID10 (0+1) being supported. Their RAID is supported by pre-OS BIOS/firmware, so it works with Linux, Windows or whatever. It just looks like an oversized HD to OSes.

2) Whether or not something is better depends on your usage, and the type of RAID you are using. As far as reliability goes, I have the _dated_ experience of Linux kernel crashes back before it was fully SMP, when 2 cores weren't as fast as a single same-clock CPU for many peak-speed related tasks, though they were usually able to process more work due to the multitasking nature of most loads. But back in that timeframe, I had the experience more than once (twice) in their first year of use of my software RAID5 (Linux MD) disks becoming corrupt and unrecoverable, before I switched to HW RAID. I have had HW RAID fail once in the following 2 decades, due to a "re-manufactured" LSI card that had the heat sink super-glued on (as I later found out) rather than connected/held using 4 screws with stiff springs + thermal paste, as it comes new. The problem there was that I didn't know what a new card was supposed to look like, and I've seen enough motherboards and cards where random chips were epoxied onto the board with opaque epoxy -- to prevent reading details from the card or removing the chip in a recoverable fashion -- to know what was supposed to be spring-mounted vs. epoxied to prevent tampering.

I also found another difference that made a huge impact on speed between the SW and HW RAID setups that I used. The SW RAIDs would be very tolerant of disk-speed differences between disks, but that also meant that striped access didn't measure up in performance. A SW RAID5 with 4 data disks ran at about the speed of 2-3 single disks in writing and reading. The same disks put in a HW RAID showed that about 9-10 out of 12 were measured as "bad" when attached to a HW RAID card. The reason: they were Deskstars, sold for the home market, rather than Ultrastars, sold for the enterprise. The Deskstars varied in speed from the stated 7200 RPM by as much as 15%, with about 9/12 disks failing due to speed variance. Ultrastars, at the time, ran about 33% more for the same size, but were within about 1-2% of each other in speed.

The second big area of difference -- HW cards can do their own checksumming for RAID5/50/6/60. Beefier cards will have dual CPUs on the RAID card and performed noticeably better on RAID6/RAID60 configs and slightly better on RAID5/RAID50. Battery-backed RAM allows averaging out write bursts and sustaining higher I/O ops for greater parallel usage, by doing write-back and buffering write bursts, compared with RAM used in write-through mode. RAM on the card (or somewhere) is still needed to calculate parity stripes in RAID5/6 modes. It is likely that at least one stripe's width is kept in card memory so a full stripe can be written in parallel to each disk. I'm sorta guessing, but RAID0, RAID1 and RAID10 (stripe of mirrors) can keep data in a write buffer for the least time, since no calculations need be done; but basically, a HW RAID card can abstract parallel writes away from the OS CPU, so it can appear to write multiple data disks in the time the OS would normally be able to write one.
Anyway, it hasn't been my experience that SW RAID is better than HW RAID, but that may be due, in part, to using a common RAID card in my setups (LSI -> Avago -> Broadcom). FWIW, though, one PCIe SSD may well outperform many RAID setups in single-user tasks, with large RAIDs using 2.5" disks possibly benefitting the needs of DB users and web hosting.
participants (14)
- Andrei Borzenkov
- Carlos E. R.
- Carlos E.R.
- David C. Rankin
- Felix Miata
- James Knott
- L A Walsh
- Lew Wolfgang
- Markus Egg
- Markus Egg
- Per Jessen
- Per Jessen
- Roger Price
- Stakanov