On Fri, Jun 03, 2016 at 10:48:05AM +0300, Andrei Borzenkov wrote:
On Fri, Jun 3, 2016 at 10:31 AM, Johannes Thumshirn <jthumshirn@suse.de> wrote:
On Thu, Jun 02, 2016 at 04:21:22PM -0400, Greg Freemyer wrote:
On Thu, Jun 2, 2016 at 3:56 PM, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
02.06.2016 16:27, Greg Freemyer wrote:
=== M.2 M Key device using AHCI protocol ===
Samsung XP941 SSD PCIe M Key 2280 MZHPU128HCGM-00000
AHCI is _not_ NVMe!!!
While that is true ...
AHCI is SATA.
... this is not always the case. M.2 storage can be connected either via SATA to the HBA on the motherboard, or via PCIe to a PCIe switch on the motherboard. In the latter case the M.2 card itself can carry either an AHCI-compatible controller or an NVMe-compatible controller. In both cases the host interface is still PCIe, which provides an advantage over plain SATA.
The above card seems to be AHCI over PCIe, so no SATA :)
OK, so the card above connects its Serial ATA AHCI controller to your CPU via PCIe instead of using the SoC's or chipset's AHCI. [1] is a good read on that.
Protocol- and kernel-driver-wise (which I think is what is of interest here), AHCI will be handled by ahci.ko, libahci.ko, scsi_mod.ko and sd.ko. This will give you the beloved /dev/sd* devices. [2] has a nice diagram of the principal operation.
Real NVMe, on the other hand, will be handled by nvme-core.ko and nvme.ko, giving you /dev/nvme* devices.
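If you want to see which driver ended up binding a disk, sysfs will tell you. Here's a quick, untested Python sketch that just walks /sys/block (device names and output are examples, not guaranteed):

#!/usr/bin/env python3
# Untested sketch: list /sys/block entries and the kernel driver bound to
# the underlying device, to tell the AHCI/SCSI path (/dev/sd*, driver "sd")
# apart from the NVMe path (/dev/nvme*).
import os

SYS_BLOCK = "/sys/block"

for dev in sorted(os.listdir(SYS_BLOCK)):
    link = os.path.join(SYS_BLOCK, dev, "device", "driver")
    if os.path.islink(link):
        # The driver symlink points into /sys/bus/.../drivers/<name>
        driver = os.path.basename(os.readlink(link))
    else:
        driver = "(no driver link)"
    print("/dev/%s: %s" % (dev, driver))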
The handling of the two is _very_ different. The biggest advantage of NVMe, for instance, is true multi-queue capability, in the hardware as well as in the kernel's block layer and the hardware driver. This means you can actually submit I/O from different CPU cores and it will travel down the stack on those cores without lock contention, as it never has to be touched by other cores.
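On a blk-mq enabled kernel you can actually see that queue layout in sysfs. Another quick untested sketch (the device name nvme0n1 is only an example; an NVMe device will typically show one hardware queue per core):

#!/usr/bin/env python3
# Untested sketch: print the blk-mq hardware-queue-to-CPU mapping the
# kernel exports under /sys/block/<dev>/mq/.
import glob
import os

DEV = "nvme0n1"  # example device, adjust for your system

for path in sorted(glob.glob("/sys/block/%s/mq/*/cpu_list" % DEV)):
    hctx = os.path.basename(os.path.dirname(path))
    with open(path) as f:
        print("hw queue %s <- CPUs %s" % (hctx, f.read().strip()))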
[1] http://www.intel.com/content/www/us/en/io/serial-ata/serial-ata-ahci-spec-re...
[2] https://www.thomas-krenn.com/de/wikiDE/images/2/2d/Linux-storage-stack-diagr...
But enough of that, I didn't intend to start a flame war here.
Byte, Johannes