Re: [opensuse] disappointing SSD experiences
Carl Hartung composed on 2020-11-06 02:13 (UTC-0500):
> Aside: The Mushkin device has a 3.9 rating with 60 reviews, whereas the PNY device has a 4.7 rating with 6,522 reviews (G**gle). In general, I have personally had very good experiences with PNY products in the past, including one of their 120 GB SSDs.
The Mushkin replacement took about 3 weeks. I put it in last night on discovering that performance of the PNY had plummeted to less than half the speed of SATA2 rotating rust:

# hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   14766 MB in  1.99 seconds = 7416.24 MB/sec
 Timing buffered disk reads: 160 MB in  3.03 seconds =  52.86 MB/sec
# hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   14320 MB in  1.99 seconds = 7187.20 MB/sec
 Timing buffered disk reads: 158 MB in  3.04 seconds =  52.01 MB/sec
# hdparm -tT /dev/sda
/dev/sda:
 Timing cached reads:   13832 MB in  1.99 seconds = 6940.82 MB/sec
 Timing buffered disk reads: 158 MB in  3.03 seconds =  52.19 MB/sec

PNY already sent me a request for invoice to get the RMA rolling for the CS900 120GB, originally purchased July 2018, RMA replaced March 2019.
--
Evolution as taught in public schools, like religion, is based on faith, not on science.
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
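For anyone reproducing these numbers: hdparm's -T flag times reads from the Linux buffer cache (essentially a memory benchmark), while -t times sequential reads from the device itself, so only the buffered-disk figure reflects drive health. A minimal sketch for collecting a few comparable samples, assuming an otherwise idle system and a stock hdparm:

# Repeat the device timing a few times; hdparm flushes the buffer cache
# before each -t run, so successive samples are independent:
for i in 1 2 3; do
    hdparm -t /dev/sda
    sleep 5    # let the drive settle between runs
done

# -T alone times cached reads (RAM bandwidth, not the drive):
hdparm -T /dev/sda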
Below is all about the SSD replaced via RMA March 2019 (original purchased July 2018).

Before receiving fresh RMA authorization from PNY, similar results using two different PCs:

/dev/sda:
 Timing cached reads:   13832 MB in  1.99 seconds = 6940.82 MB/sec
 Timing buffered disk reads: 158 MB in  3.03 seconds =  52.19 MB/sec

After receiving RMA authorization from PNY, in preparation to return it, I wiped the first 64 sectors from /dev/zero, wrote a new GPT with 12 partitions, then wiped the first one of 7777 GB, neither formatting nor attempting to mount any of them:

# hdparm -tT /dev/sdh
/dev/sdh:
 Timing cached reads:   16440 MB in  1.99 seconds = 8253.75 MB/sec
 Timing buffered disk reads: 1304 MB in  3.00 seconds = 434.25 MB/sec

When originally tested, hdparm -tT reported 8923 & 538 MB/sec.

Fstrim has been run weekly via timer.

Is there something about normal or abnormal usage that could account for the vastly reduced speed after 11 months' 24/7 uptime, then be significantly, but not entirely, relieved by clearing the existing partitions and writing new tables? Is there any reason not to proceed with the return?
--
Evolution as taught in public schools, like religion, is based on faith, not on science.
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
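Since the weekly trim is central to the question, it may be worth confirming that the timer actually fires and that the trims succeed. A minimal check, assuming systemd's stock fstrim.timer and fstrim.service units:

# Is the timer enabled, and when did it last trigger / will it next trigger?
systemctl status fstrim.timer
systemctl list-timers fstrim.timer

# What did the last run actually trim?
journalctl -u fstrim.service -n 20

# One-off verbose trim of every mounted filesystem that supports discard:
fstrim -av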
On 04/12/2020 01.20, Felix Miata wrote:
> Below is all about the SSD replaced via RMA March 2019 (original purchased July 2018).
> Before receiving fresh RMA authorization from PNY, similar results using two different PCs:
> /dev/sda:
>  Timing cached reads:   13832 MB in  1.99 seconds = 6940.82 MB/sec
>  Timing buffered disk reads: 158 MB in  3.03 seconds =  52.19 MB/sec
> After receiving RMA authorization from PNY, in preparation to return it, I wiped the first 64 sectors from /dev/zero, wrote a new GPT with 12 partitions, then wiped the first one of 7777 GB, neither formatting nor attempting to mount any of them:
> # hdparm -tT /dev/sdh
> /dev/sdh:
>  Timing cached reads:   16440 MB in  1.99 seconds = 8253.75 MB/sec
>  Timing buffered disk reads: 1304 MB in  3.00 seconds = 434.25 MB/sec
> When originally tested, hdparm -tT reported 8923 & 538 MB/sec.
> Fstrim has been run weekly via timer.
> Is there something about normal or abnormal usage that could account for the vastly reduced speed after 11 months' 24/7 uptime, then be significantly, but not entirely, relieved by clearing the existing partitions and writing new tables? Is there any reason not to proceed with the return?
On rotating rust, you have to measure speed on the same partition each time. On SSD I do not know.

The trim status would affect /write/ speed, I understand. And you say you run fstrim regularly.
--
Cheers / Saludos,
Carlos E. R. (from 15.1 x86_64 at Telcontar)
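Since hdparm -t only reads, trim state will not show up in its numbers; a write test is needed to see it. A deliberately destructive sketch for a drive that is being wiped anyway, assuming the SSD is /dev/sdX and holds no wanted data:

# DESTRUCTIVE: overwrites the start of /dev/sdX.
# oflag=direct bypasses the page cache so the rate reflects the device:
dd if=/dev/zero of=/dev/sdX bs=1M count=1024 oflag=direct status=progress

# Optionally discard (trim) the whole device first, then rerun the dd
# and compare; a large gap suggests the drive was starved of free blocks:
blkdiscard /dev/sdX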
Carlos E. R. composed on 2020-12-04 12:45 (UTC+0100):
> Felix Miata wrote:
>> Below is all about the SSD replaced via RMA March 2019 (original purchased July 2018).
>> Before receiving fresh RMA authorization from PNY, similar results using two different PCs:
>> /dev/sda:
>>  Timing cached reads:   13832 MB in  1.99 seconds = 6940.82 MB/sec
>>  Timing buffered disk reads: 158 MB in  3.03 seconds =  52.19 MB/sec
This was on the SATA bus.
>> After receiving RMA authorization from PNY, in preparation to return it, I wiped the first 64 sectors from /dev/zero, wrote a new GPT with 12 partitions, then wiped the first one of 7777 GB, neither formatting nor attempting to mount any of them:
>> # hdparm -tT /dev/sdh
>> /dev/sdh:
>>  Timing cached reads:   16440 MB in  1.99 seconds = 8253.75 MB/sec
>>  Timing buffered disk reads: 1304 MB in  3.00 seconds = 434.25 MB/sec
This was USB connected.
>> When originally tested, hdparm -tT reported 8923 & 538 MB/sec.
This was SATA connected.
>> Fstrim has been run weekly via timer.
>> Is there something about normal or abnormal usage that could account for the vastly reduced speed after 11 months' 24/7 uptime, then be significantly, but not entirely, relieved by clearing the existing partitions and writing new tables? Is there any reason not to proceed with the return?

> On rotating rust, you have to measure speed on the same partition each time. On SSD I do not know.

AFAICT, hdparm -t only measures HDD/SSD speeds, not filesystem or partition speeds.

> The trim status would affect /write/ speed, I understand. And you say you run fstrim regularly.

I got the final RMA authorization, but the PNY's USB speed is now consistently in the 43x MB/sec range. On SATA, it's back to the 53x MB/sec range, with only partitions created after deleting all, then wiping the first 64 sectors, so no filesystems. I don't want to send it back if it would be a waste of time and postage that just gets me the same SSD back from PNY, and I'd like to know how to avoid this happening again if it's not the PNY's fault.
--
Evolution as taught in public schools, like religion, is based on faith, not on science.
Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!
Felix Miata *** http://fm.no-ip.com/
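The USB-versus-SATA gap, and the remaining shortfall against the original 538 MB/sec, could also come from the link rather than the flash, so it may be worth confirming the negotiated link rate and the drive's own health counters before shipping it back. A minimal sketch, assuming smartmontools is installed and the SSD is /dev/sdh:

# Negotiated SATA link rate; look for "current: 6.0 Gb/s" vs 3.0 or 1.5:
smartctl -i /dev/sdh | grep -i 'SATA Version'
# (behind a USB bridge, smartctl may need: smartctl -d sat -i /dev/sdh)

# Wear and error attributes that would support an RMA claim:
smartctl -A /dev/sdh

# When attached via USB, check which speed the bridge negotiated (480M = USB2):
lsusb -t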
On 07/12/2020 01.49, Felix Miata wrote:
> Carlos E. R. composed on 2020-12-04 12:45 (UTC+0100):
>> Felix Miata wrote:
>>> ...
>>> Is there something about normal or abnormal usage that could account for the vastly reduced speed after 11 months' 24/7 uptime, then be significantly, but not entirely, relieved by clearing the existing partitions and writing new tables? Is there any reason not to proceed with the return?
>> On rotating rust, you have to measure speed on the same partition each time. On SSD I do not know.
> AFAICT, hdparm -t only measures HDD/SSD speeds, not filesystem or partition speeds.
Not really. hdparm measures disk speed, true, but it does so using the partition you specify, if you specify one. You can check this yourself by running tests on several partitions of a rotating disk and noticing that the speeds differ:

hdparm -tT /dev/sdb
hdparm -tT /dev/sdb1
hdparm -tT /dev/sdb10

Once I divided a disk into 50 or so equal partitions and tested them all. It was notably faster at about 1/3 of the way in. I have not experimented with SSDs, though.

50 MB/s is ridiculous.
--
Cheers / Saludos,
Carlos E. R. (from 15.1 x86_64 at Telcontar)
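Carlos's 50-partition experiment can be approximated without repartitioning: newer hdparm releases accept --offset (in GiB) for the -t timing, so read speed can be sampled along the device. A sketch, assuming /dev/sdb and an hdparm recent enough to support --offset:

# Sample sequential read speed at several positions along the disk;
# --offset is given in GiB and applies to the -t device timing:
for off in 0 25 50 75 100; do
    echo "offset ${off} GiB:"
    hdparm -t --offset $off /dev/sdb
done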