Listmates,

Looking at hdparm -tT results for a Linux software RAID 1 array, I see a difference in read performance between the two drives, and I don't know whether it is just individual drive variance or whether mdraid has something to do with it.

The mdraid 1 setup is on a pair of 750G Seagate drives:

/dev/sdb: ST3750330AS: 34°C
/dev/sdc: ST3750330AS: 36°C

The RAID1 members are /dev/sdb1 and /dev/sdc1, each taking up the entire disk. I don't know if there is a reason not to use the whole drive in the array, but the array is basically a temporary setup on my son's computer (which had an open SATA port) to get a week or so of runtime on the drives before moving them over to a server.

Checking read performance with hdparm on each individual drive and then on the array itself shows a small performance drop for /dev/sdc:

02:11 KillerZ~> for i in sdb sdc md0; do sudo hdparm -tT /dev/$i; done

/dev/sdb:
 Timing cached reads:        2130 MB in 2.00 seconds = 1064.71 MB/sec
 Timing buffered disk reads:  328 MB in 3.01 seconds =  108.94 MB/sec

/dev/sdc:
 Timing cached reads:        2088 MB in 2.00 seconds = 1044.08 MB/sec
 Timing buffered disk reads:  292 MB in 3.02 seconds =   96.84 MB/sec

/dev/md0:
 Timing cached reads:        2116 MB in 2.00 seconds = 1057.94 MB/sec
 Timing buffered disk reads:  322 MB in 3.01 seconds =  107.13 MB/sec

Thinking about it (a dangerous thing to do), two questions arose:

(1) How does mdraid actually work? Is there a primary drive that it uses for read/write while the other is there for sync/backup?

(2) If you can't write to both drives simultaneously on the same clock cycle (can you??), it seems there must be a write/copy step somewhere in the mdraid scheme. And if the drives are tied together by the data in mdadm.conf, could the read performance difference seen for /dev/sdc above be the result of mdraid's operation and not simply drive variance? I haven't a clue; that's why I'm asking.
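For what it's worth, a quick sanity check on the figures above (just arithmetic, no claim about mdraid internals) shows that md0's buffered-read number sits much closer to the faster member (sdb) than to the mean of the two, which would be consistent with a single sequential hdparm read being serviced mostly by one member rather than averaged across both:

```shell
# Buffered disk read figures from the hdparm runs above (MB/sec).
sdb=108.94
sdc=96.84
md0=107.13

# Simple mean of the two members, for comparison against md0.
awk -v a="$sdb" -v b="$sdc" -v m="$md0" 'BEGIN {
    mean = (a + b) / 2
    printf "mean of sdb/sdc: %.2f MB/sec, md0: %.2f MB/sec\n", mean, m
}'
# -> mean of sdb/sdc: 102.89 MB/sec, md0: 107.13 MB/sec
```

So md0 comes in about 4 MB/sec above the mean and only about 2 MB/sec below sdb, which argues against a simple weighted average of the two drives.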
And then there is the /dev/md0 hdparm result, which looks like it takes the better of the two drives' numbers, less a tiny amount of mdraid overhead. How does that work? Or is the difference there more of a simple weighted average of the performance of sdb and sdc??

Any software RAID gurus willing to shed a bit of light on this situation?

--
David C. Rankin, J.D., P.E.
Rankin Law Firm, PLLC
510 Ochiltree Street
Nacogdoches, Texas 75961
Telephone: (936) 715-9333
Facsimile: (936) 715-9339
www.rankinlawfirm.com