Re: [SLE] Linux Success and Question
Steve Graegert wrote:
On 10/7/05, James Wright
wrote: [snipped intro]
Anyways, what can I do to find out if I have a physical problem with my drive, and if I do, can I mark a section of the hard drive as 'bad' so that it will not be used? I can't afford a new hard drive, and desperately need this laptop as it is my only computer. Thank you for any help or suggestions.
To find out if your reiserfs partition(s) have bad blocks you can run
/sbin/badblocks [-b <reiserfs-block-size>] device > badblocks.lst
Default block size is 4K, but if unsure, use 'debugreiserfs device' to find out. To fix bad blocks on a reiserfs, issue
reiserfsck --badblocks file device
with 'file' as the list of all bad blocks found by the badblocks program. Some data on this filesystem may have been lost, so you might want to back up all bad blocks before fixing them. Try dd_rescue and run reiserfsck on the backup.
To create a reiserfs on a partition with badblocks, supply the --badblocks argument to mkreiserfs.
\Steve
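[A compact sketch of the sequence Steve describes, with the destructive steps left commented out. The device name /dev/hda2, the 4K block size, and the backup path are assumptions; check the real block size with debugreiserfs before running anything.]

```shell
# Assumed device and block size -- verify with: debugreiserfs /dev/hda2
# 1. Salvage an image first; dd_rescue keeps going past unreadable areas:
#      dd_rescue /dev/hda2 /mnt/backup/hda2.img
# 2. Scan for bad blocks (this can take many hours on an ailing drive):
#      badblocks -b 4096 -o badblocks.lst /dev/hda2
# 3. Hand the list to reiserfsck so those blocks are marked unusable:
#      reiserfsck --badblocks badblocks.lst /dev/hda2

# badblocks can emit duplicate entries across passes; a sorted,
# de-duplicated list is the safest thing to feed to reiserfsck.
# (The printf below is a stand-in for real badblocks output.)
printf '452733\n452672\n452672\n452732\n' > badblocks.lst
sort -n badblocks.lst | uniq > badblocks.clean
cat badblocks.clean
```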
Ok, I ran:

badblocks -b 4096 /dev/hda2 > badblocks.lst

and stopped it after 13 hours. How long does this run on a 20 Gig drive? Anyways, I think my best bet is to partition off the 'bad' area, like James Knott said. What is my best bet to determine the start and end of the 'bad' section? Do I need to run the above command until it stops on its own? The output of badblocks.lst (to the point where I stopped it) is:

452672 452732 452733 500920 500936 501184 501198 501264 501283 501367 501451 501535 501619
521768 521814 521880 521898 533352 533395 533479 533563 533648 533896 533910 545432 545491
549704 549751 549835 550400 550435 550603 550687 550771 551336 551371 551455 557584 557587
557671 557755 557840 557924 558048 558102 558168 558187 558271 558355 558439 558523 558607
558692 558776 558860 558984 559039 572464 572492 572576 572577 572660 572661 572839 572923
573007 573008 573091 573092 573175 573176 573259 573260 573344 573428 573512 573513 573691
573775 573776 573859 573860 573943 573944 574027 574028 574111 574112 574195 574196 574280
574281 574364 574365 574542 574543 574627 574628 574711 574712 574795 574879 574880 574963
575047 575049 575132 575133 575216 575336 575394 575456 575479 575563 575647 581776 581779
581863 581947 582032 582116 582240 582294 582360 582379 582463 582547 582631 582715 582799
582884 582968 583052 583176 583231 583315 583399 583483 583567 583651 583736 583820 583904
584024 584083 584167 584251 584335 584419 584503 584588 584672 584756 584880 584935 585019
585103 585187 585271 585355 585440 585524 585608 585728 585787 585871 585955 586039 586123
586207 586291 586376 586460 586584 586638 586704 586723 586807 586891 586975 587059 587143
587228 587312 587432 587490 587552 587575 587576 587659 587743 608080 608096 611288 611335
611420 611736 611767

Any help is appreciated.

James W
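[On the start/end question: the numbers badblocks prints are filesystem blocks counted from the start of the partition, so the extent of the bad area is just the minimum and maximum of the list. A small awk sketch, assuming the 4K block size from the command above; one 4096-byte block spans eight 512-byte sectors, and the resulting sector numbers are still relative to the start of /dev/hda2, so the partition's own start offset must be added before using them in fdisk.]

```shell
# Stand-in for the real badblocks.lst; the values are the first and last
# entries from the list above.
printf '452672 452732 452733\n611736 611767\n' > badblocks.lst

# Minimum and maximum bad block, converted to 512-byte sectors (4096/512 = 8):
awk '{ for (i = 1; i <= NF; i++) {
         if (min == "" || $i + 0 < min) min = $i + 0
         if ($i + 0 > max) max = $i + 0
       } }
     END { print "first bad block:", min, "-> sector", min * 8
           print "last bad block:", max, "-> sector", max * 8 }' badblocks.lst
```

[Whether walling off that region with a new partition is worth the trouble is a separate question; as the replies that follow argue, a spreading bad-block region usually means the drive is on its way out.]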
James, On Saturday 08 October 2005 08:36, James Wright wrote:
...
Ok, I ran:
badblocks -b 4096 /dev/hda2 > badblocks.lst
and stopped it after 13 hours. How long does this run on a 20 Gig drive? Anyways, I think my best bet is to partition off the 'bad' area, like James Knott said. What is my best bet to determine the start and end of the 'bad' section? Do I need to run the above command until it stops on its own? The output of badblocks.lst (to the point where I stopped it) is:
452672 452732 ...
Many, many fascinating numbers
...
Any help is appreciated.
Get what data you can off that drive, then subject it to a low-level format. If there are too many bad blocks detected at that point, chuck it. Or use it as a high-tech bauble / trinket / paper weight / doorstop / whatever.
James W
Randall Schulz
On 10/8/05, Randall R Schulz
James,
On Saturday 08 October 2005 08:36, James Wright wrote:
...
Ok, I ran:
badblocks -b 4096 /dev/hda2 > badblocks.lst
and stopped it after 13 hours. How long does this run on a 20 Gig drive? Anyways, I think my best bet is to partition off the 'bad' area, like James Knott said. What is my best bet to determine the start and end of the 'bad' section? Do I need to run the above command until it stops on its own? The output of badblocks.lst (to the point where I stopped it) is:
452672 452732 ...
Many, many fascinating numbers
...
Any help is appreciated.
Get what data you can off that drive, then subject it to a low-level format. If there are too many bad blocks detected at that point, chuck it. Or use it as a high-tech bauble / trinket / paper weight / doorstop / whatever.
Agree. Locating bad blocks and fixing them is only useful to rescue as much data as possible and buy some time to do backups. The fs tools are of limited use since they cannot provide any real protection against data loss. Once bad blocks are found, the disk is likely to fail in the near future (for the reasons mentioned in other posts). Simply put: get a new drive. \Steve
On Sat, 8 Oct 2005 19:53:15 +0200, you wrote:
On 10/8/05, Randall R Schulz
wrote: James,
On Saturday 08 October 2005 08:36, James Wright wrote:
...
Ok, I ran:
badblocks -b 4096 /dev/hda2 > badblocks.lst
and stopped it after 13 hours. How long does this run on a 20 Gig drive? Anyways, I think my best bet is to partition off the 'bad' area, like James Knott said. What is my best bet to determine the start and end of the 'bad' section? Do I need to run the above command until it stops on its own? The output of badblocks.lst (to the point where I stopped it) is:
452672 452732 ...
Many, many fascinating numbers
...
Any help is appreciated.
Get what data you can off that drive, then subject it to a low-level format. If there are too many bad blocks detected at that point, chuck it. Or use it as a high-tech bauble / trinket / paper weight / doorstop / whatever.
Agree. Locating bad blocks and fixing them is only useful to rescue as much data as possible and buy some time to do backups. The fs tools are of limited use since they cannot provide any real protection against data loss. Once bad blocks are found, the disk is likely to fail in the near future (for the reasons mentioned in other posts). Simply put: get a new drive.
\Steve
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.

Mike-
--
Mornings: Evolution in action. Only the grumpy will survive.
--
Please note - Due to the intense volume of spam, we have installed site-wide spam filters at catherders.com. If email from you bounces, try non-HTML, non-encoded, non-attachments.
Michael, On Saturday 08 October 2005 11:07, Michael W Cocke wrote:
...
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.
IDE drives? Why use them?? Give me at least Ultra160 SCSI drives running at at least 10,000 RPM. CPU speeds have significantly outstripped both main and secondary storage speeds. If you care about performance, it pays to get the fastest RAM and disks you can find (or afford).
Mike-
Randall Schulz
On Saturday 08 October 2005 04:16, Randall R Schulz wrote:
Michael,
On Saturday 08 October 2005 11:07, Michael W Cocke wrote:
...
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.
IDE drives? Why use them?? Give me at least Ultra160 SCSI drives running at at least 10,000 RPM.
...
People, People! Don't make me use smilies!!

Of course, I was half serious. I also just installed the first SATA drive in my home system in order to get a high-capacity drive to hold backups.

Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.

RRS
On Sat, 8 Oct 2005 04:41:10 -0700, you wrote:
On Saturday 08 October 2005 04:16, Randall R Schulz wrote:
Michael,
On Saturday 08 October 2005 11:07, Michael W Cocke wrote:
...
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.
IDE drives? Why use them?? Give me at least Ultra160 SCSI drives running at at least 10,000 RPM.
...
People, People!
Don't make me use smilies!!
Of course, I was half serious. I also just installed the first SATA drive in my home system in order to get a high-capacity drive to hold backups.
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.
RRS
BRU is pretty decent and costs around $80.00 for a personal license. I used to back up to DAT tape, tried (briefly) backing up to DVD+RW, and then gave up and built a redundant server. There just is NO reasonable way to back up 2.2 TB... at least on less than Bill G's budget. Mike-
On Saturday 08 October 2005 19:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.
All you need is a 9 track 800 bpi drive. ;-)
Ha! I'm not ready to start a museum just yet! RRS
Randall R Schulz wrote:
On Saturday 08 October 2005 19:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware. All you need is a 9 track 800 bpi drive. ;-)
Ha!
I'm not ready to start a museum just yet!
I used to maintain and repair them, many years ago. Some models could also do 1600 bpi and a few 5760 (IIRC). Lessee now, the device address on Data General computers was 22, hit the program load switch and watch the reels spin... ;-)
On Saturday 08 October 2005 21:31, James Knott wrote:
Randall R Schulz wrote:
On Saturday 08 October 2005 19:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.
All you need is a 9 track 800 bpi drive. ;-)
Ha!
I'm not ready to start a museum just yet!
I used to maintain and repair them, many years ago. Some models could also do 1600 bpi and a few 5760 (IIRC).
Lessee now, the device address on Data General computers was 22, hit the program load switch and watch the reels spin... ;-)
I remember that! We had an Eclipse system and a Nova 3. The Nova actually had a hard drive with two platters: one was a 5 MB "fixed" drive, the other a 5 MB removable. The Eclipse was the first system I saw with a tape drive that used vacuum to control the tape loops. Brings back memories.
Mitch Thompson wrote:
I'm not ready to start a museum just yet! I used to maintain and repair them, many years ago. Some models could also do 1600 bpi and a few 5760 (IIRC).
Lessee now, the device address on Data General computers was 22, hit the program load switch and watch the reels spin... ;-)
I remember that! We had an Eclipse system and a Nova 3. The Nova actually had a hard drive with two platters: one was a 5 MB "fixed" drive, the other a 5 MB removable. The Eclipse was the first system I saw with a tape drive that used vacuum to control the tape loops.
Brings back memories.
I believe the 5+5 MB drive was called the "Phoenix" and the 10+10 was the "Gemini", though it might be the other way around. While our Eclipse systems had the vacuum columns, the first ones I worked on (~1978) were attached to a Collins system. I can't think of the name of the manufacturer though. They made tape stands for a lot of companies.
James Knott wrote:
While our Eclipse systems had the vacuum columns, the first ones I worked on (~1978) were attached to a Collins system. I can't think of the name of the manufacturer though. They made tape stands for a lot of companies.
Now I remember, the drives were made by Potter.
On Saturday 08 October 2005 22:23, James Knott wrote:
I believe the 5+5 MB drive was called the "Phoenix" and the 10+10 was the "Gemini", though it might be the other way around. While our Eclipse systems had the vacuum columns, the first ones I worked on (~1978) were attached to a Collins system. I can't think of the name of the manufacturer though. They made tape stands for a lot of companies.
I never knew the name of the drive. This was when I was in the Air Force. My first assignment, actually, from 1984 to 1987. Moved on to another shop that used AT&T 3B15s as file servers, and IBM PC-ATs as the workstations. The 3B's had vacuum tape drives, as well. The PC-ATs ran Microsoft Xenix! Right before I retired in 2003, we were cleaning out an old storage area and they actually dragged an old Nova 3 chassis out of the corner. It wasn't the entire rack, just the "guts". It was heading straight to DRMO (Defense Reutilization Management Office) for scrapping. They wouldn't let me have it (probably a good thing: my wife would have killed me!), but I did manage to snag the "CPU" board, which I eventually want to frame and hang on my "I love me" wall.
James, On Saturday 08 October 2005 19:31, James Knott wrote:
Randall R Schulz wrote:
On Saturday 08 October 2005 19:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.
All you need is a 9 track 800 bpi drive. ;-)
Ha!
I'm not ready to start a museum just yet!
I used to maintain and repair them, many years ago. Some models could also do 1600 bpi and a few 5760 (IIRC).
Yes. We used them for backups at my university computer lab. I worked there for several years doing such menial tasks. I remember one time I had to do a backup and got the order of the arguments to the "dump" command backward. (Scripts you say? The shell then was a feeble shadow of its current self!) Not good. There was little validation of arguments or of device content. I trashed the boot and super blocks. I had a lot of file system reconstruction to do that night! By the way, I think it's 6250 BPI for high-density mode.
Lessee now, the device address on Data General computers was 22, hit the program load switch and watch the reels spin... ;-)
We were a DEC shop (PDP11s of all sorts), but it was similarly straightforward to program those devices. The good old days... Randall Schulz
Randall R Schulz wrote:
Yes. We used them for backups at my university computer lab. I worked there for several years doing such menial tasks. I remember one time I had to do a backup and got the order of the arguments to the "dump" command backward. (Scripts you say? The shell then was a feeble shadow of its current self!) Not good. There was little validation of arguments or of device content. I trashed the boot and super blocks. I had a lot of file system reconstruction to do that night!
I worked on one system where the command to copy a file was:

XFER source > destination

And the command to delete a file was:

XFER source >

This meant that if you accidentally hit the "Enter" key (actually carriage return), you'd delete the file you were trying to copy!
By the way, I think it's 6250 BPI for high-density mode.
Yes, that sounds about right.
Lessee now, the device address on Data General computers was 22, hit the program load switch and watch the reels spin... ;-)
We were a DEC shop (PDP11s of all sorts), but it was similarly straightforward to program those devices.
The good old days...
We had several PDP-11s, a few VAX 11/780s and one, count 'em one PDP-8i. Remember the RIM loader? We also had a bunch of various Data General Eclipse & Nova models and some Pr1me and Collins computers.
On Saturday 08 October 2005 21:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware.
All you need is a 9 track 800 bpi drive. ;-)
Well, those of us that started off on TI-99's and Tandy Color Computers will remember backing up our BASIC programs to tape...that would be cassette tape. I still remember typing away at a huge BASIC game on my COCO and, rather than immediately saving to the tape, I decided to run it first. It locked up, forcing me to hit the reset button. My first lesson in making sure to do backups. I've gone from tapes, to 5.25" floppies, to 3.5" floppies, to 100M ZIP disks, to CD-RW, to DVD-RW. Now, I back up my 250G hard drive to a 160G hard drive. That's all the 160G is for. Mitch
Mitch Thompson wrote:
On Saturday 08 October 2005 21:11, James Knott wrote:
Randall R Schulz wrote:
Now, tape... There's a peripheral on which I'm loath to spend the money it takes to get decent backup hardware. All you need is a 9 track 800 bpi drive. ;-)
Well, those of us that started off on TI-99's and Tandy Color Computers will remember backing up our BASIC programs to tape...that would be cassette tape. I still remember typing away at a huge BASIC game on my COCO and, rather than immediately saving to the tape, I decided to run it first. It locked up, forcing me to hit the reset button. My first lesson in making sure to do backups.
I've gone from tapes, to 5.25" floppies, to 3.5" floppies, to 100M ZIP disks, to CD-RW, to DVD-RW. Now, I back up my 250G hard drive to a 160G hard drive. That's all the 160G is for.
I also used to use cassettes with my IMSAI 8080. Almost 30 years ago, I was on a 7-week course for the Datapoint 2200 intelligent terminal. It had dual digital cassette drives. In one of the classes we were doing assembly programming (virtually identical to the Intel 8008, the first 8-bit microprocessor). I wandered around the class a bit and noticed one of the other guys was almost finished typing in his code and I then "accidentally" tripped over the power cord. ;-)
On 10/8/05, Randall R Schulz
Michael,
On Saturday 08 October 2005 11:07, Michael W Cocke wrote:
...
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.
IDE drives? Why use them??
Because not everyone can afford a SCSI disk and an appropriate controller, and because IDE does a good job for PCs. My workstations in the lab are 100% SCSI, but the user PCs are 100% IDE and I have never had problems with them over the past 8 years. No drive has died yet (the oldest being 7 years of age).
CPU speeds have significantly outstripped both main and secondary storage speeds. If you care about performance, it pays to get the fastest RAM and disks you can find (or afford).
That's why SCSI is found in professional workstations, where performance is the main criterion, and IDE in PCs, where price is significant. \Steve
On Sat, 8 Oct 2005 04:16:38 -0700, you wrote:
Michael,
On Saturday 08 October 2005 11:07, Michael W Cocke wrote:
...
Almost no modern IDE drives support 'low-level formatting' - not for years. The servo track is written at the factory and that's it. If you issue the appropriate init call it's just intercepted by the on-board electrics.
IDE drives? Why use them?? Give me at least Ultra160 SCSI drives running at at least 10,000 RPM.
CPU speeds have significantly outstripped both main and secondary storage speeds. If you care about performance, it pays to get the fastest RAM and disks you can find (or afford).
Mike-
Randall Schulz
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn. Just for grins, I once priced out what it would cost me to set up my _home_ servers as SCSI160 - I think it came to around 30K, because I'd also have to have custom chassis fabbed, since the largest capacity SCSI is still smaller than you can get from IDEs - I'd need 4 dozen drives... Could be a heat and power problem too! And the original poster is using IDE drives, which is what we were talking about. Mike-
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks? -- Roger
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year. Mike-
Mike, On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this. The command protocols are identical to IDE (just as IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.). What is it you don't trust?
Mike-
Randall Schulz
On 10/8/05, Randall R Schulz
Mike,
On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this.
The command protocols are identical to IDE (just as IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.).
What is it you don't trust?
I can't comment on Michael's logic, but the company I am working for has run a field test with SATA drives from different vendors to find out if small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products) and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Circa 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA) but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic. \Steve
On Sat, 2005-10-08 at 21:49 +0200, Steve Graegert wrote:
I can't comment on Michael's logic, but the company I am working for has run a field test with SATA drives from different vendors to find out if small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products) and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Circa 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA) but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic.
Yipes. There goes my evening. I wonder if the study was with different SATA controllers and drives. Or maybe they found one specific drive that is a no-no. I would be very curious to know. SATA is starting to be the main interface on many boards. For example, the current no-OS Dell, targeted for people who do not want to pay the Microsoft tax (e.g., OSS OS users), is SATA. --- Roger
On 10/8/05, Roger Oberholtzer
On Sat, 2005-10-08 at 21:49 +0200, Steve Graegert wrote:
I can't comment on Michael's logic, but the company I am working for has run a field test with SATA drives from different vendors to find out if small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products) and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Circa 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA) but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic.
Yipes. There goes my evening. I wonder if the study was with different SATA controllers and drives. Or maybe they found one specific drive that is a no-no. I would be very curious to know. SATA is starting to be the main interface on many boards. For example, the current no-OS Dell, targeted for people who do not want to pay the Microsoft tax (e.g., OSS OS users), is SATA.
I actually can't recall the details of this report but I might be able to compile some details on Monday. To my knowledge, the hardware department assembles the systems we ship to our customers and does broad tests with new components on a regular basis. Currently our workstations in the software department are equipped with Asus boards and EIDE disks for the reasons mentioned. \Steve
Steve, On Saturday 08 October 2005 12:49, Steve Graegert wrote:
On 10/8/05, Randall R Schulz
wrote: Mike,
On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
...
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this.
The command protocols are identical to IDE (just as IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.).
What is it you don't trust?
I can't comment on Michael's logic, but the company I am working for has run a field test with SATA drives from different vendors to find out if small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products) and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Circa 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA) but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic.
So, why was this experiment not run on IDE? For exactly the reasons I outlined above / before, I'd expect a lot of similarity / commonality between the two. If you don't expect IDE to satisfy whatever requirements you have, then I wouldn't expect SATA to meet them, either. It's just a different interconnect bus that eliminates some of the issues with IDE (the ever-problematic master / slave distinction, e.g.) and lowers the cost of the cabling, which is often not trivial. What exactly constitutes the "failures" encountered? If it wasn't in the SATA bus logic, then it has nothing to do with SATA and instead is an issue with some other part of the drive. In particular, if the failure was in the mechanics or actual drive electronics, then the same failures at the same (approximate) rate would be expected from those drives when equipped with IDE instead of SATA. And, for that matter, the same as if those drives were outfitted with SCSI interfaces.
\Steve
Randall Schulz
On Sat, 8 Oct 2005 13:11:21 -0700, you wrote:
Steve,
On Saturday 08 October 2005 12:49, Steve Graegert wrote:
On 10/8/05, Randall R Schulz
wrote: Mike,
On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
...
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this.
The command protocols are identical to IDE (just as IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.).
What is it you don't trust?
I can't comment on Michael's logic, but the company I am working for has run a field test with SATA drives from different vendors to find out if small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products) and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Circa 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA) but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic.
So, why was this experiment not run on IDE? For exactly the reasons I outlined above / before, I'd expect a lot of similarity / commonality between the two. If you don't expect IDE to satisfy whatever requirements you have, then I wouldn't expect SATA to meet them, either. It's just a different interconnect bus that eliminates some of the issues with IDE (the ever-problematic master / slave distinction, e.g.) and lowers the cost of the cabling, which is often not trivial.
What exactly constitutes the "failures" encountered? If it wasn't in the SATA bus logic, then it has nothing to do with SATA and instead is an issue with some other part of the drive. In particular, if the failure was in the mechanics or actual drive electronics, then the same failures at the same (approximate) rate would be expected from those drives when equipped with IDE instead of SATA. And, for that matter, the same as if those drives were outfitted with SCSI interfaces.
\Steve
Randall Schulz
I've heard parts of this before... "IDE drives use the same mechanics as SCSI drives, so the failure rate, looking at mechanical failures alone, should be identical" - but they weren't: platters seized MUCH more often on IDE drives. I probably saw 10-to-1 IDE-to-SCSI mechanical failures for around two years straight, across three different drive brands. Don't ask me to explain it, but I know what I experienced.
Mike-
-- Mornings: Evolution in action. Only the grumpy will survive. --
Please note - Due to the intense volume of spam, we have installed site-wide spam filters at catherders.com. If email from you bounces, try non-HTML, non-encoded, non-attachments.
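One way to probe Randall's point - that mechanical wear is a property of the drive itself, not the bus - is to read the drive's SMART counters, which the firmware reports the same way whether the drive is attached via IDE, SATA, or SCSI. A minimal sketch, assuming the smartmontools package is installed and `/dev/hda` is the device (both are assumptions; adjust for your system, and note smartctl needs root):

```shell
# Sketch, not a definitive procedure: SMART data comes from the drive's own
# firmware, so it reflects mechanical health independent of the interface.
# DEV is an assumed device name; the block skips gracefully if it is absent.
DEV=${DEV:-/dev/hda}
if command -v smartctl >/dev/null 2>&1 && [ -e "$DEV" ]; then
    smartctl -H "$DEV"                                   # overall health verdict
    smartctl -A "$DEV" | grep -Ei 'realloc|pending|uncorrect'   # wear attributes
    STATUS=checked
else
    echo "smartctl or $DEV unavailable; skipping"
    STATUS=skipped
fi
```

Rising reallocated- or pending-sector counts point at the platters and heads rather than the bus logic, which is the distinction Randall is asking about.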
On Sat, 8 Oct 2005 12:33:56 -0700, you wrote:
Mike,
On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this.
The command protocols are identical to IDE (just as the IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system, and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.).
What is it you don't trust?
Mike-
Randall Schulz
I didn't have any hard data to point to, but see the other messages in this thread... I just like to wait a bit and let others bleed on the cutting edge, if they're going to. And by the way, I went thru the same (justified, in retrospect) wait-and-see procedure when the shift from MFM, RLL, and ESDI drives to IDE drives occurred. And FWIW, Maxtor drives are pure crap lately - I've blown thru two dozen warranty replacements in a bit over a year. Seagates, particularly the 400 GB models, are holding up well.
Mike-
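The IDE tuning Mike mentions earlier in the thread - "don't accept the defaults for DMA speed, and set up your cache properly" - can be sketched with hdparm. A sketch under assumptions: `/dev/hda` and the chosen flags are examples for a typical IDE setup, not universal defaults, so check `man hdparm` before applying them to a real disk; the block skips if hdparm or the device is absent:

```shell
# Assumed device name; run as root on a real system.
DEV=${DEV:-/dev/hda}
if command -v hdparm >/dev/null 2>&1 && [ -b "$DEV" ]; then
    hdparm -d1 "$DEV"       # enable DMA
    hdparm -c1 "$DEV"       # enable 32-bit I/O support
    hdparm -u1 "$DEV"       # unmask other interrupts during disk I/O
    hdparm -tT "$DEV"       # buffered and cached read timings, to verify the gain
    TUNED=yes
else
    echo "hdparm or $DEV unavailable; skipping"
    TUNED=no
fi
```

Running the `-tT` timing test before and after is the easy way to see whether the tuning closed the gap to SCSI that Mike describes.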
On Sat, 2005-10-08 at 15:02 -0400, Michael W Cocke wrote:
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
A year is a long time. I am making a gamble that SATA disks will work well in a high-volume video capture system. I chose SATA because they offer good performance for the price. The system will basically write to the disks once, and they will be kept; that does not eliminate the need for high performance the one time they are written. SCSI seems too extravagant for this, while IDE wants to suck CPU time. In the rest of the data collection system, where the disks are continuously written to, SCSI is what we use. -- Roger
On Saturday 08 October 2005 03:02 pm, Michael W Cocke wrote:
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I've been running PC's since day one of the IBM PC (1981) and usually have about 6 PC's going around the house. I used to be totally SCSI but like you, I just couldn't warrant the cost anymore... but I still have a couple of machines with SCSI, some with IDE and a couple with SATA. In all those years, and having had (a guess) about 35 drives during that time, I don't recall ever losing a drive due to hardware failure... except about two months ago. Yup, a SATA drive, and I would guess it was less than a year old. (Maxtor 250GB) I replaced it and within a month I had an unreadable sector on the new drive (Seagate 250) but was able to recover everything (Acronis TrueImage) and the drive is still working. Doesn't give me warm fuzzies about SATA drives but it's not a very big sample.
On Sat, 2005-10-08 at 15:54 -0400, Bruce Marshall wrote:
On Saturday 08 October 2005 03:02 pm, Michael W Cocke wrote:
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I've been running PC's since day one of the IBM PC (1981) and usually have about 6 PC's going around the house. I used to be totally SCSI but like you, I just couldn't warrant the cost anymore... but I still have a couple of machines with SCSI, some with IDE and a couple with SATA.
In all those years, and having had (a guess) about 35 drives during that time, I don't recall ever losing a drive due to hardware failure... except about two months ago. Yup, a SATA drive, and I would guess it was less than a year old. (Maxtor 250GB)
I replaced it and within a month I had an unreadable sector on the new drive (Seagate 250) but was able to recover everything (Acronis TrueImage) and the drive is still working.
Doesn't give me warm fuzzies about SATA drives but it's not a very big sample.
Sigh.. Roger
On Sat, 8 Oct 2005 15:54:47 -0400, you wrote:
On Saturday 08 October 2005 03:02 pm, Michael W Cocke wrote:
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I've been running PC's since day one of the IBM PC (1981) and usually have about 6 PC's going around the house. I used to be totally SCSI but like you, I just couldn't warrant the cost anymore... but I still have a couple of machines with SCSI, some with IDE and a couple with SATA.
In all those years, and having had (a guess) about 35 drives during that time, I don't recall ever losing a drive due to hardware failure... except about two months ago. Yup, a SATA drive, and I would guess it was less than a year old. (Maxtor 250GB)
I replaced it and within a month I had an unreadable sector on the new drive (Seagate 250) but was able to recover everything (Acronis TrueImage) and the drive is still working.
Doesn't give me warm fuzzies about SATA drives but it's not a very big sample.
Sounds like I'm a little older than you. I had the dubious honor of repairing everything from IBM and Burroughs minis to CP/M-80 and MP/M II systems, pre-PC. Speaking purely for my own hardware over the years, I wouldn't hazard a guess how many drives I've had. I used to run the 3rd largest BBS in NJ, for one thing, and I've seen easily over 30 hardware faults on disk drives (just mine - I've seen hundreds total) - but oddly, I can count the ones on SCSI drives on one hand. Can't say I understand it, because logically the hardware between SCSI and IDE really should be about the same... but I won't argue with reality this time.
Mike-
On Saturday 08 October 2005 22.36, Michael W Cocke wrote:
On Sat, 8 Oct 2005 15:54:47 -0400, you wrote:
On Saturday 08 October 2005 03:02 pm, Michael W Cocke wrote:
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I've been running PC's since day one of the IBM PC (1981) and usually have about 6 PC's going around the house. I used to be totally SCSI but like you, I just couldn't warrant the cost anymore... but I still have a couple of machines with SCSI, some with IDE and a couple with SATA.
In all those years, and having had (a guess) about 35 drives during that time, I don't recall ever losing a drive due to hardware failure... except about two months ago. Yup, a SATA drive, and I would guess it was less than a year old. (Maxtor 250GB)
I replaced it and within a month I had an unreadable sector on the new drive (Seagate 250) but was able to recover everything (Acronis TrueImage) and the drive is still working.
Doesn't give me warm fuzzies about SATA drives but it's not a very big sample.
Sounds like I'm a little older than you. I had the dubious honor of repairing everything from IBM and Burroughs minis to CP/M-80 and MP/M II systems, pre-PC.
Speaking purely for my own hardware over the years, I wouldn't hazard a guess how many drives I've had. I used to run the 3rd largest BBS in NJ for one thing, but I've seen easily over 30 hardware faults on disk drives (just mine - I've seen hundreds total) - but oddly, I can count the ones on SCSI drives on one hand. Can't say I understand it, because logically the hardware between SCSI and IDE really should be about the same... but I won't argue with reality this time.
Mike-
I think it has something to do with heat. The few SCSIs I have seen/used all seem "thicker" and more heat-resilient than the IDEs; some even have a heatsink design with small tabs around it. During my years of running PCs and a few high-end workstations, I have had a higher ratio of ATA failures than SCSI. And the few SCSIs that have died had been in use some 5-10 years non-stop - roughly 44,000 to 88,000 hours of continuous running - so I think they suffered from old age.

I have had a fair share of ATA disks die on me, anything from 4 GB to 200 GB, and most hadn't run more than a year, some not even non-stop at that. In my case they seem to start with bad blocks that escalate to really trashed drives. Only one outright hardware failure, as far as I have been able to conclude (an 80 GB that apparently lost a read head).

What's more, I have mostly lost drives under Windows, and that is something I can't understand. It seems like Windows uses the disk system differently - wears them down or something. Most bad blocks start around the boot blocks/FATs (and make you think "viruses") and then spread out over the disk. Those systems have been virus-free, freshly defragged and everything. The disks failing under Linux have been in systems running almost non-stop for 3 to 5 years.

As for the S-ATA / P-ATA difference: as far as I know it's just the controller board on the disk that differs. The internal drive is the same; it's just a new cabling and signalling system, so the hardware that actually "runs" (motor, heads and whatnot) is the same. Why an S-ATA should be any different is anybody's guess. One reason COULD be cabling: it's easier to get a bad connection on an S-ATA plug - it needs to be seated really well, or oxidation and vibration will get the better of it. The older EIDE flat cable is easier to get into a full fit; they don't come loose that easily. The same goes for SCSI: some connectors (the older ones) sit more tightly than the new ones (e.g. LVD). And as usual, cable quality matters: the better the cable, the better the connection. It's frustrating to check a system for failing cables - it's so intermittent that it's almost impossible to catch.

Enough ranting....
-- /Rikard
-----------------------------------------------------------------------------
email : rikard.j@rikjoh.com
web : http://www.rikjoh.com
mob : +46 (0)736 19 76 25
------------------------ Public PGP fingerprint ----------------------------
< 15 28 DF 78 67 98 B2 16 1F D3 FD C5 59 D4 B6 78 46 1C EE 56 >
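When bad blocks cluster the way Rikard describes, one option raised earlier in the thread is to fence the damaged region off behind an unused partition. A minimal sketch of finding the region's extent from a `badblocks` listing - the sample block numbers and the file name `sample-badblocks.lst` are hypothetical stand-ins for real output, and 4096 is an assumed filesystem block size (confirm yours with debugreiserfs):

```shell
# Hypothetical sample standing in for real `badblocks -b 4096 <device>` output,
# one block number per line.
printf '452672\n452733\n611767\n' > sample-badblocks.lst

BLOCKSIZE=4096   # assumed block size; confirm for your filesystem
REGION=$(sort -n sample-badblocks.lst | awk -v bs="$BLOCKSIZE" '
    NR == 1 { first = $1 }          # smallest bad block after sorting
    { last = $1 }                   # largest bad block when the loop ends
    END {
        # %.0f avoids 32-bit integer overflow on large byte offsets
        printf "bad region: blocks %d-%d (bytes %.0f-%.0f)",
               first, last, first * bs, (last + 1) * bs - 1
    }')
echo "$REGION"
```

The byte offsets give the span to exclude when repartitioning; since a failing area often grows, leaving generous margins around it (or replacing the drive) is the safer course.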
Michael W Cocke wrote:
Speaking purely for my own hardware over the years, I wouldn't hazard a guess how many drives I've had. I used to run the 3rd largest BBS in NJ for one thing, but I've seen easily over 30 hardware faults on disk drives (just mine - I've seen hundreds total) - but oddly, I can count the ones on SCSI drives on one hand. Can't say I understand it, because logically the hardware between SCSI and IDE really should be about the same... but I won't argue with reality this time.
I wonder if the drive manufacturers selected the "better built" drives for SCSI and used the rest for IDE? I seem to recall they did that for RLL vs. MFM drives.
On Sat, 2005-10-08 at 16:36 -0400, Michael W Cocke wrote:
On Sat, 8 Oct 2005 15:54:47 -0400, you wrote:
Speaking purely for my own hardware over the years, I wouldn't hazard a guess how many drives I've had. I used to run the 3rd largest BBS in NJ for one thing, but I've seen easily over 30 hardware faults on disk drives (just mine - I've seen hundreds total) - but oddly, I can count the ones on SCSI drives on one hand. Can't say I understand it, because logically the hardware between SCSI and IDE really should be about the same... but I won't argue with reality this time.
Mike-
Perhaps this explains why SCSI drives still carry a longer warranty than IDE/SATA drives - which is why I still use SCSI drives at home, with their five-year warranty. -- Ken Schneider UNIX since 1989, linux since 1994, SuSE since 1998
participants (10)
- Bruce Marshall
- James Knott
- James Wright
- Ken Schneider
- Michael W Cocke
- Mitch Thompson
- Randall R Schulz
- Rikard Johnels
- Roger Oberholtzer
- Steve Graegert