SuSE 9.1 on MSI K8T Master-FAR SATA disk i/o problem
Grüss,

I have 4 of these systems that will go into production soon. Three of them are exactly the same, while the fourth has different hard drives. All of them are experiencing the same problem. I have gone from FreeBSD 5.2.1 to Mandrake 10 (amd64 RC1) to SuSE 9.1 Professional (DVD box set) in search of something that will actually work and perform well. I just used YaST to download the most recent kernel patches, so I'm now at 2.6.4-54-5-smp.

Here is my configuration:

- MSI K8T Master2-FAR
- 2x Opteron 242
- 2x 512MB OCZ DDR400 RAM ("Specially Engineered and Optimized for the AMD Athlon64 FX platform")
- 2x WD Raptor 36GB (the fourth machine has WD 160GB SATA drives)
- 52x IDE CD-ROM
- 360-watt Thermaltake PSU

These will be servers, so they have no video cards (and therefore no X, etc.). I used an old ViRGE PCI card to do the install. (The ncurses-based YaST could use some work, by the way, particularly for setting up partitions.)

I have a fresh install of SuSE 9.1 Professional from the boxed-set DVD. I had to disable ACPI in order to install, and naturally I went with Linux software RAID for the disks.

Watching top is very interesting: right before a crash, the wa (I/O wait) CPU percentage goes way up, to 50% or more. Then the system becomes unresponsive.

I've searched the net and this list and still haven't found a thorough prescription for fixing my problem. I seem to have made some progress after enabling the libata driver, as the test machine lasted a lot longer before failing.

I've got INITRD_MODULES="via_sata libata reiserfs raid1" in /etc/sysconfig/kernel and append = "resume=/dev/hda11 splash=silent acpi=off apm=off console=tty0" in /etc/lilo.conf. I'm not sure the libata module "took"; someone said that my SATA drives should appear as sd# rather than hd#, but in the boot messages the drives are still called hda and hdc. I am unfamiliar with the kernel-rebuilding process in Linux, and with how the modules work.

Mostly I just want to hear from somebody else who has this motherboard, and exactly what they did to get everything working, because I've tried everything I'm capable of doing. I've found some helpful things so far, but I haven't yet hit the magic bullet. Of course I can provide any further information needed. I've just got to get these machines working!

Thank you,
Brock Witherspoon
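A quick way to check whether libata actually claimed the drives — a sketch for a 2.6-era system; the module and device names shown are the expected ones, not confirmed for this board:

```shell
# If libata bound the SATA ports, its driver pair appears in lsmod and
# the disks show up as sda/sdb; the legacy IDE driver keeps hda/hdc.
lsmod | grep -E 'libata|sata'
cat /proc/partitions
# boot-time probe messages say which driver took the controller
dmesg | grep -iE 'libata|sata|scsi disk'
```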
I have this motherboard with two Opteron 244s and have it working fine with SUSE 9.1 and Gentoo. However, I am not using SATA; I am using a PATA WD1200JB-75CRA0. Your problem is almost certainly SATA-related, so concentrate your efforts there.

Oh, and my memory is 2x 512MB Corsair 3200LL running at DDR400.

On Wed, Jun 16, 2004 at 11:57:09AM -0500, Brock Witherspoon wrote:
[...]
--
Brian Hall, Linux Consultant
http://pcisys.net/~brihall
Kia Ora,

I have the same system; here are a couple of problems I noticed with your setup.

1) Your power supply is far too weak. Especially with two processors, you need at LEAST 26 amps on the +12V rail (I doubt your 360-watt unit is anywhere near this). Use a Toppower 450-watt or an Enermax PSU; 360W is definitely too thin.

2) The OCZ RAM you have will not work with this board. OCZ is designed for single-processor, enthusiast-level boards and uses very aggressive (tight) CAS timings. This is a server board, and it does not work well (at all) with the OCZ PC3200 ECC registered RAM. I made the same mistake when first purchasing and got many, many problems; I have a number of threads open on the OCZ/MSI/2CPU forums about this. Scrap the OCZ RAM and get Samsung PC3200 ECC RAM. I did, and I get much more stable operation (Samsung adheres to the JEDEC standard and therefore runs CAS 3 at PC3200, not CAS 2 like Corsair/Mushkin/OCZ).

3) The RAID on this motherboard is software-only. Disable all arrays through the VT8xxx BIOS utility (hit Tab at startup), then use Linux software RAID through YaST to set up the array. I get better performance from Linux software RAID than from the VIA Windows driver supplied by the manufacturer. (I use two Hitachi 160GB drives; with Raptors you should do even better.)

Hope this has been informative.

Kind regards,
Joel
New Zealand

On Thu, 2004-06-17 at 04:57, Brock Witherspoon wrote:
[...]
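Joel's point 3 in practice — a minimal sketch of creating the array with mdadm once the BIOS arrays are disabled. The partition names are examples only, and this destroys their contents:

```shell
# Build a two-disk RAID1 from partitions of type "fd" (Linux raid
# autodetect). /dev/sda2 and /dev/sdb2 are illustrative names.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
mkreiserfs /dev/md0        # SuSE 9.1's default filesystem
cat /proc/mdstat           # shows the initial resync progress
```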
Joel Wiramu Pauling wrote:
[...]
Hi all,

A question about this software RAID: is it really worth it for server use (stability, reliability, ...)? Is it ahead of mid-range SCSI? I suppose dual-boot is no longer possible, but has anyone tried it with Win32 emulation? I also suppose that a bootable array still needs to be PATA?

Thanks for the clarifications; I hope someone can point me to an up-to-date guide for software RAID under Linux.

Sincerely,
Eric Laruelle
Well, I have a small (500MB x 2) partition on each drive which is not part of the Linux LVM/RAID partition, specifically for swap and /boot.

As for whether it's worth it: if you want fast performance, RAID 0 with SATA is faster than most mid-range SCSI, as you put it. In fact, you would have to be talking about real SCSI arrays to beat SATA on performance when it's set up correctly. I won't vouch for RAID 1 or 5, as I'm not using it for that purpose.

As for the Win32 emulation, I have no idea what you're on about. SATA (at least with the VT8xxx chips) appears to Linux as a standard IDE drive, which is pretty much what it is, with a beefed-up interface. The kernel module that runs it is pretty mature now. Where is the Win32 idea coming from? Can you clarify?

Like I said, because this board's RAID is software RAID even in Windows with the VIA-provided driver, Windows is doing the same thing Linux does, except that you have to have the driver installed.

Kind regards,
Joel

On Thu, 2004-06-17 at 07:49, Eric Laruelle wrote:
[...]
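For concreteness, Joel's layout idea can be sketched like this (sizes and device names are illustrative, not his exact setup):

```shell
# Per-disk layout: /boot and swap live outside the array, so LILO and
# the kernel can find them without any RAID support.
#   /dev/sda1   ~500MB   ext2   /boot
#   /dev/sda2   ~500MB   swap
#   /dev/sda3   rest     fd     software-RAID member
# ongoing health checks once the array is running:
cat /proc/mdstat
mdadm --detail /dev/md0    # device name is an example
```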
Joel Wiramu Pauling wrote:
[...]
Hi,

Are you saying that you have two SATA hard drives connected to your AMD64 box without encountering problems? I had data loss previously, which made me move to SCSI under Linux, and I heard that SATA support was not yet rock-solid. For a server that is a little... risky. Moreover, I remember all the benchmarks and stress tests I had to run in order to configure hdparm on EIDE drives; even though SuSE is one of the best distributions for default hard-disk parameters, it's still far from transparent. Let me add that SCSI behaves like a charm under multiple concurrent I/O, which is not the case with EIDE.

Does anybody have benchmarks of LVM/RAID for an EIDE/SATA-skeptical mind? I'm sorry, but all those SR reviews with wonderful results on sequential tests don't reflect MY reality. Please help me change my mind if I'm wrong today.

The questions are: does SATA definitely work with SuSE 9.1 on AMD64? In software SATA RAID arrays? From the very first install, off the DVD? Is a /boot partition necessary in this case?

Thanks for everything; I'm already gathering info about LVM.

Eric
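A rough way to get the numbers Eric is asking for — a sketch only; run it on an idle box, and treat hdparm's sequential figures with the same skepticism he applies to published reviews:

```shell
# sequential read: cached vs. buffered, single disk vs. array
hdparm -tT /dev/sda        # one raw disk
hdparm -tT /dev/md0        # the md array (name is an example)
# crude sustained-write test; the file size is chosen to exceed RAM
# on a 1GB machine, so page-cache effects don't dominate
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=2048
sync
```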
On Thu, 17 Jun 2004 07:41:08 +1200 Joel Wiramu Pauling <aenertia@aenertia.net> wrote:
1) Your power supply is far too weak... especially with 2 proc in you need at LEAST 26Amps on the 12+ volt power rail. (I doubt you 360 watt is anywhere near this). Use a Toppower 450Watt or Enermax PSU. 360W is definitely too thin.
Yes. I have a Tagan 480-watt PSU.

- Richard
Hi,

I did not see this question until now, but I am sending this in case you still haven't found the problem.

If the problems you are experiencing produce dma_timer_expiry messages (check with dmesg), then you might just be seeing the same problem I did a few months ago; check this list's archives for more details. In short, I found comments on the net suspecting problems with simultaneous DMA access on the IDE channels of dual-CPU machines. I then set out to force my system to use the libata drivers instead of the default IDE driver. I finally got a kernel configuration that accomplished this, and since then all my DMA problems have disappeared - no problems at all. By using the libata driver, the simultaneous DMA access seems to be avoided.

I have not checked the net for more comments on this for some time now, so there might be more to find. I've asked around in a few places (including here, I think) whether any kernel gurus know if there is indeed a bug logged on this and whether it has been fixed, but I have not received any answers.

If anyone is interested, I can mail you a copy of my kernel configuration.

Cheers,
Johan

--- Brock Witherspoon <wbwither@bobball.uchicago.edu> wrote:
[...]
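For anyone wanting to attempt Johan's fix without his config file, the relevant 2.6.4-era options are roughly these (option names taken from the mainline tree; verify against your own kernel source):

```shell
# In the kernel config: keep the legacy VIA IDE driver away from the
# SATA ports and enable libata's VIA driver instead, then rebuild the
# kernel and regenerate the initrd.
#   CONFIG_BLK_DEV_VIA82CXXX   - legacy IDE driver (claims disks as hd*)
#   CONFIG_SCSI_SATA=y         - libata core
#   CONFIG_SCSI_SATA_VIA=m     - sata_via low-level driver (disks become sd*)
grep -E 'CONFIG_SCSI_SATA|VIA82CXXX' /usr/src/linux/.config
```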
participants (6)
- Brian Hall
- Brock Witherspoon
- Eric Laruelle
- Joel Wiramu Pauling
- Johan Backlund
- rkimber@ntlworld.com