On Saturday, February 24, 2018, Carlos E. R. <robin.listas@telefonica.net> wrote:
On Saturday, 2018-02-24 at 12:45 +0100, Per Jessen wrote:
Greg Freemyer wrote:
Nah. They're old servers, only 3 Gbps SATA and PCI-X; we use multiple 1 GigE cards, some with bonding. If the SSD caches don't add anything significant, we'll upgrade the motherboards and controllers. The chassis is still perfect.
I have not seen a clear case for SSD caching in Linux. There are two or three alternatives, and at least one of them has been abandoned; the others are too complicated. I considered using an SSD as a cache on my desktop, but had to abandon the idea.
If you get something conclusive I'd be interested to know, although your use case is very different from mine.
Carlos,

It is way too use-case specific for my findings to be useful. I work with data sets that are too big to leverage the kernel's normal block-buffering mechanism, even on 64 GB machines.

As an example, on Friday I had to confirm that a 150 GB tar file (*.tgz) provided to me on a thumb drive wasn't encrypted. I didn't give it a whole lot of thought: I copied it to my laptop's rotating drive and started to untar it. After an hour I realized I had made a mistake and killed the untar; a few hundred thousand files had been extracted by that point.

An SSD cache, I believe, would have made that job far faster, but note that it would also need to function as a write cache. I don't know whether the Linux SSD cache schemes offer write-caching.

In the meantime, my colleagues at work told me late Friday that we should consider using this opportunity to replace our 2010-era VMware ESXi server with a newer one (still used, but maybe a 2012 server design with 2015-released CPUs like the E5-4527, which uses DDR3 RAM). So the land of ever-changing specs continues to exist.

If we do go that route, VMware's vSphere package (~$4500) supports multi-node ESXi setups (including fail-over) that use an SSD in the host hypervisor node as a disk cache, but in write-through mode only. I.e., writes are not accelerated by the cache, but subsequent reads don't have to go to disk. The contents of the cache can even move between two nodes if a VM is switched to a different node for load-balancing reasons. Load balancing VMs between ESXi nodes is outside my personal knowledge base at the moment, but maybe it is headed my way.

Putting the SSD cache in the main server has a lot of merit because it would allow the speed of NVMe SSDs to be leveraged. Then maybe another SSD cache in the back-end shared storage server to perform write-caching! The one in the storage server could be a cheaper SATA-interfaced SSD, in all likelihood without any performance hit.

Greg
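P.S. For the "is this .tgz actually encrypted?" check itself, a streaming pass along these lines (a rough Python sketch; the filename is made up) would confirm readability without extracting anything to disk:

import tarfile

ARCHIVE = "sample.tgz"   # hypothetical path, not the actual file I was given

try:
    # "r|gz" streams the archive, so the drive only has to read the file
    # once, sequentially, and nothing gets written back out.
    with tarfile.open(ARCHIVE, mode="r|gz") as tar:
        count = sum(1 for _member in tar)
    print("readable gzip'd tar, %d members" % count)
except (tarfile.ReadError, OSError) as err:
    print("not a readable .tgz (possibly encrypted):", err)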
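P.P.S. Since write-through vs. write-back is the crux of whether a cache would have helped with that untar, here is a toy Python model of the two policies. It is only a sketch of the semantics, not of how any real caching layer (kernel or hypervisor) is implemented:

# Toy model of a write-through vs. a write-back SSD cache in front of a
# slow backing disk.  Purely illustrative.

class SlowDisk:
    def __init__(self):
        self.blocks = {}
        self.reads = 0
        self.writes = 0

    def read(self, lba):
        self.reads += 1
        return self.blocks.get(lba)

    def write(self, lba, data):
        self.writes += 1
        self.blocks[lba] = data


class SSDCache:
    def __init__(self, disk, write_back=False):
        self.disk = disk
        self.write_back = write_back
        self.cache = {}     # lba -> data held on the SSD
        self.dirty = set()  # blocks not yet flushed (write-back only)

    def read(self, lba):
        if lba in self.cache:              # cache hit: no disk read
            return self.cache[lba]
        data = self.disk.read(lba)         # miss: read disk, then cache it
        self.cache[lba] = data
        return data

    def write(self, lba, data):
        self.cache[lba] = data
        if self.write_back:
            self.dirty.add(lba)            # acknowledge now, flush later
        else:
            self.disk.write(lba, data)     # write-through: disk sees it now

    def flush(self):
        for lba in self.dirty:
            self.disk.write(lba, self.cache[lba])
        self.dirty.clear()


if __name__ == "__main__":
    for mode in (False, True):
        disk = SlowDisk()
        cache = SSDCache(disk, write_back=mode)
        for lba in range(100000):          # something like a big untar
            cache.write(lba, b"data")
        for lba in range(100000):          # re-reading what was just written
            cache.read(lba)
        print("write_back=%-5s  disk writes=%6d  disk reads=%d"
              % (mode, disk.writes, disk.reads))
        cache.flush()                      # write-back pays the disk cost later

With write-through the untar still waits on every one of those disk writes; with write-back they land on the SSD and get flushed afterwards, which is the behaviour that kind of job would need.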