On 10/8/05, Randall R Schulz wrote:
Mike,
On Saturday 08 October 2005 12:02, Michael W Cocke wrote:
On Sat, 08 Oct 2005 20:41:16 +0200, you wrote:
On Sat, 2005-10-08 at 14:33 -0400, Michael W Cocke wrote:
I used to use exclusively SCSI drives, but the price/performance breakpoint just doesn't warrant it anymore, IMHO. If you use a decent IDE drive, don't accept the defaults for DMA speed, and set up your cache properly, the performance is close enough. Not the same - but none of my clients have money to burn.
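As a rough sketch of what "don't accept the defaults" can mean on Linux, the usual tool is hdparm; the device name /dev/hda and the UDMA mode below are assumptions, so check what your drive actually supports (hdparm -i /dev/hda) before forcing anything:

```shell
# Show the current DMA, write-cache, and 32-bit I/O settings.
hdparm -d -W -c /dev/hda

# Enable DMA, 32-bit I/O, and the drive's write cache.
hdparm -d1 -c1 -W1 /dev/hda

# Optionally force a specific transfer mode (UDMA5 here is an assumption;
# picking a mode the drive or controller can't handle can hang the bus).
hdparm -X udma5 /dev/hda

# Simple before/after benchmark of cached and raw read throughput.
hdparm -tT /dev/hda
```

Note that enabling the write cache (-W1) trades a little safety for speed: data acknowledged by the drive may not yet be on the platters if power is lost, which is part of the cache trade-off mentioned above.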
What about SATA disks?
I tend more toward the paranoid about disk systems... SATA doesn't have enough of a track record to make me happy about using them. Ask me again in a year.
I'm not sure I see the logic in this.
The command protocols are identical to IDE (just as the IEEE 1394 / FireWire command structure is identical to SCSI), so much of the drive electronics and firmware will be shared between IDE drives (from a given manufacturer and of a given design family) and their SATA counterparts. The actual drive hardware (the electromechanical parts) is independent of the bus used to connect the drive to the system, and so the reliability of the mechanical portions has nothing to do with SATA vs. IDE vs. SCSI vs. USB vs. FireWire (etc.).
What is it you don't trust?
I can't comment on Michael's logic, but the company I am working for ran a field test with SATA drives from different vendors to find out whether small servers could be equipped with SATA instead of SCSI drives (to reduce costs for our products), and came to the conclusion that SATA failed on almost all counts, with reliability as the most disappointing aspect. Roughly 12% of all SATA disks failed after 9 to 11 months, the report said. I am not too deep into it (and I have no personal experience with SATA), but there seem to be some issues to be solved before SATA is a candidate for serious applications. From this point of view I can see the logic.

\Steve