On Sat, Dec 2, 2017 at 7:40 PM, L A Walsh <suse@tlinx.org> wrote:
Greg Freemyer wrote:
I just don't want people to think that, in general, NVMe SSDs with a $25 adapter are never the best option for desktop PCs.
...
For me, I hope I've bought my last SATA-interfaced SSD. It's an old, slow legacy solution as far as I'm concerned.
---- A large factor might be how many slots one has for each (PCIe slots vs. SATA ports) and what the PCIe requirements are: e.g., is it fine in a PCIe x1 slot? How many lanes does your PCIe attachment use? That would strongly affect the I/O rates you get.
You can get 2- or 4-lane cards. They cost more, but you can buy a single-slot adapter card that will hold 4 NVMe SSDs.
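(If you want to check what link a given adapter actually negotiated, a quick way on Linux -- the PCI address below is just a placeholder, substitute whatever the first command reports -- is:

   # find the NVMe controller's PCI address
   lspci | grep -i 'non-volatile'
   # then compare advertised vs. negotiated link speed and width
   sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'

A PCIe 3.0 x4 link is roughly 4 GB/s of raw bandwidth, so lane count matters a lot for these drives.)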
Another factor might be other usage -- like if you want to include it in a RAID -- I've noted that most RAID controllers don't work with on-bus (NVMe) disks.
No disagreement, but I don't build RAID with SSDs either. Rotating rust for large capacity. NVMe for speed.
The benchmarks are for SSD drives that take the place of hard drives -- not motherboard-attached ramdisks.
NVMe SSDs are current generation hard drives. They aren't ramdisks in any sense of the word. I bought my first one 2 years ago.
----
That you can't boot from.
You can in new systems. I have 2 PCs booting from NVMe.
FWIW, I took a 4-disk SSD RAID0 out of a non-booting system, put it into another machine of the same model, and it booted flawlessly.
That's because the controller includes a standalone BIOS. NVMe doesn't use a separate BIOS; the main BIOS has to have NVMe support if you want to boot off of it.
For that reason, I wouldn't call them hardware-based drives. From your description, it's memory that is physically attached to the motherboard via a PCIe interface, no?
They use the same storage chips your SATA SSDs use. They hold data even when disconnected from power. The SATA interface chips are simply replaced with PCIe interface chips. In legacy SSDs, the SATA interface chips are the throughput bottleneck, not the storage chips.
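(A rough way to see that bottleneck yourself on Linux -- the device names here are only examples, and this is a read-only test -- is to compare raw sequential reads:

   # sequential read from a SATA SSD, bypassing the page cache
   sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct
   # the same test against an NVMe SSD
   sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 iflag=direct

The SATA device tops out somewhere around 550 MB/s because of the 6 Gb/s link, while a decent NVMe drive on PCIe 3.0 x4 can report several times that.)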
Memory attached directly to the system motherboard used to be how memory was designed into the system, back before CPU speeds were so much faster than motherboard speeds (mostly gen-1 PCs and computers before that).
The website I cited doesn't consider them in its benchmarks, as they aren't hard-disk compatible (able to be thrown into another PC and booted from, as one example).
That's like saying "SATA drives aren't hard drives. My 1999-era PC doesn't have SATA ports, so they aren't portable between all PCs." I work with my clients' PCs routinely -- 10 in the last 2 weeks. 2 of those 10 had exclusively NVMe SSDs for storage. It may be leading edge now, but it isn't bleeding edge any longer. fyi: I think my first client-owned PC with NVMe storage was in 2015. At the time, I had no idea they would show up as /dev/nvme0n1, etc.
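(If you're curious how they look on a current system -- this assumes the nvme-cli package is installed -- try something like:

   lsblk /dev/nvme0n1    # the namespace plus its partitions (nvme0n1p1, ...)
   sudo nvme list        # all NVMe namespaces, with model and firmware info

The naming is controller/namespace based: nvme0 is the first controller, n1 the first namespace on it, p1 the first partition.)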
That DOESN'T mean they might not be a good value for the money.
But sometimes being able to take the drive out and put it in another system with no loss of data or functionality is important. It's too bad Windows can't simply use that type of drive as a cache for a regular HD, like Linux is supposed to be able to.
I use Windows 10 with an NVMe SSD exactly like that. I had to buy "PrimoCache" to manage the cache. It works well.
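(On the Linux side, the usual tools for that are lvmcache or bcache. A minimal lvmcache sketch -- the volume group and device names are purely placeholders for an existing LVM setup with the big rotating disk in vg0 as LV "data":

   # add the NVMe drive (or a partition on it) to the volume group
   sudo vgextend vg0 /dev/nvme0n1p1
   # carve out a cache pool on the NVMe device
   sudo lvcreate --type cache-pool -L 100G -n nvcache vg0 /dev/nvme0n1p1
   # attach the cache pool to the slow data LV
   sudo lvconvert --type cache --cachepool vg0/nvcache vg0/data

After that, hot blocks of vg0/data are cached on the NVMe device transparently.)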
The $25 adapters' only negative that I know of is that you typically can't boot off of them.
But motherboards that have the adapter built in also have boot support. Laptops with NVMe SSD support can also boot off them.
---- I would hope so.
fyi: it's a bit on the extreme side, but a client had a 2 TB NVMe SSD
--- nice, wonder how much that cost...
A 2 TB Samsung PM961 NVMe SSD is about $1300. I bought exactly that in Jan 2017. I bought a $25 adapter and use it to hold data, even though that motherboard supports NVMe booting. I simply haven't taken the time to reorganize it to boot off the PM961.

Greg