USB 3.1, Flash in DIMM slots, NGFF and Thunderbolt 2 are promising for virtualization at home

Posted by Paul Braren on Aug 1 2013 in
  • ESXi
  • HomeServer
  • Hyper-V
  • Storage
It tends to take a while for the latest and greatest in consumer tech to sneak its way into the datacenter. Think about USB 3.0, for example. It can allow a small business or test lab to implement much more affordable external storage and backup options: USB 3.0 passthrough allowed me to run an external $190 Mediasonic RAID5 enclosure, seen as a 5.6TB NTFS drive in a Windows VM, under ESXi 4.1 in 2011 and under ESXi 5.0 in 2012, explained here.*

Is this affordable enclosure officially supported for use with ESXi? No. But does it work? Yes! Official support doesn't tend to deter tinkerers. And it might not matter anyway, given the device is just passed through to the VM and its Windows operating system, which is supported.

Looking ahead, here are some interesting recent articles about promising technologies to keep an eye on.

I really like the prospects of breaking past those legacy barriers, affordably.

Here are the speeds, release dates, and standards (a quick bandwidth-conversion sketch follows the list):

• 0.48 Gbps - 2000 - USB 2.0 "High Speed"
• 1.5 Gbps - 2003 - SATA1 aka SATA I
• 3.0 Gbps - 2004 - SATA2 aka SATA II (many older eSATA and mSATA implementations)
• 5.0 Gbps - 2008 - USB 3.0 "SuperSpeed"
• 6.0 Gbps - 2009 - SATA3 aka SATA III (many newer eSATA and mSATA implementations)
• 10.0 Gbps - 2011 - Thunderbolt
• 10.0 Gbps - 2013 - USB 3.1 "SuperSpeed+", aka SuperSpeed USB 10Gbps
• 20.0 Gbps - 2013 - Thunderbolt 2
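
Keep in mind those are raw line rates, not what your file copies will see. Here's a quick back-of-the-envelope sketch (Python, just for illustration) that converts a few of them into usable payload bandwidth after subtracting line-encoding overhead: 8b/10b for SATA and USB 3.0, 128b/132b for USB 3.1's SuperSpeed+. Protocol and filesystem overhead will shave off more, so treat these as ceilings, not benchmarks.

```python
# Back-of-the-envelope: convert raw line rates into approximate payload bandwidth.
# 8b/10b encoding ships 10 bits per data byte (SATA, USB 3.0); 128b/132b ships
# 132 bits per 128 payload bits (USB 3.1 "SuperSpeed+"). Protocol framing is ignored.

ENCODING_EFFICIENCY = {
    "8b/10b": 8 / 10.0,
    "128b/132b": 128 / 132.0,
}

links = [
    ("SATA II",   3.0, "8b/10b"),
    ("USB 3.0",   5.0, "8b/10b"),
    ("SATA III",  6.0, "8b/10b"),
    ("USB 3.1",  10.0, "128b/132b"),
]

for name, line_rate_gbps, encoding in links:
    payload_gbps = line_rate_gbps * ENCODING_EFFICIENCY[encoding]
    mb_per_sec = payload_gbps * 1000 / 8  # decimal megabytes per second
    print("%-9s %4.1f Gbps line rate -> ~%4.0f MB/s usable, before protocol overhead"
          % (name, line_rate_gbps, mb_per_sec))
```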

I'd rather see 10x speed jumps between revisions, but hey, it's at least some progress, given how many years have gone by since the last wave of speed bumps.

Hindrances?

• Admittedly, many good consumer technologies never seem to really make it into the datacenter. Think Thunderbolt / Thunderbolt 2 (aka Light Peak).
• There are also datacenter-focused technologies that never make it into the consumer market. Think Fibre Channel (FC).

There's a promising new flash technology that I'm personally hoping isn't destined for that latter category.

In theory, the idea of placing NAND flash on the memory bus makes perfect sense. After all, NAND really is random-access memory (RAM), just like the fancy DRAM chips used in today’s systems. And these memory channels are seriously fast, blowing even PCIe out of the water.

The result, according to Kevin Wagner, vice president of marketing for Diablo, is flash storage that "acts and behaves more like DRAM than SSDs."

"We put this on the memory channel, right there with the system memory," Wagner told InfoStor. Sporting the "exact same dimensions as a standard DIMM," installing Diablo's MCS is a plug-and-play affair, with the exception of loading new drivers. A standard DRAM DIMM is required to make the setup work, however "every other slot can be filled with our modules," he said.

The performance gains are dramatic. The company estimates that by configuring MCS as a block storage target, latencies are slashed by 85 percent compared to PCIe SSDs. The gains are even more pronounced compared to SATA and SAS SSDs (96 percent reduction).
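
To put those percentages in rough perspective, here's a tiny sketch; the baseline latencies below are my own ballpark assumptions for 2013-era SSDs, not figures from Diablo or InfoStor.

```python
# Illustration only: what an 85% / 96% latency reduction means in absolute terms.
# Baseline write latencies are assumed ballpark figures, not vendor-published numbers.

baselines = [
    ("PCIe SSD",      70.0, 0.85),   # assumed ~70 microsecond baseline
    ("SATA/SAS SSD", 200.0, 0.96),   # assumed ~200 microsecond baseline
]

for name, baseline_us, reduction in baselines:
    resulting_us = baseline_us * (1.0 - reduction)
    print("%-13s assumed %5.1f us -> ~%4.1f us after a %2.0f%% reduction"
          % (name, baseline_us, resulting_us, reduction * 100))
```

Either way, you end up in the neighborhood of single-digit to low-double-digit microseconds, which is the point of putting flash on the memory channel in the first place.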

The industry has guided Flash through two major platform generations – from SAN and NAS network-attached arrays and appliances to server-attached PCI-Express based products – amidst an ongoing debate about how and where to deploy Flash storage to derive better application performance.

Flash technology is poised to embark on the third era of platform innovation with Memory Channel Storage™, or MCS™, a new technology that resets the performance standards for Flash-based storage.

One reason I tend to enjoy reading about what might be coming up next is that I also enjoy trying out leading-edge technologies on hypervisors, to see which PCIe devices can be passed through. Read more about VMDirectPath / VT-d / Passthrough here. The lack of folks blogging about this sort of thing was part of what inspired me to start TinkerTry.com in the first place, back in June of 2011.

What I'm really eager to know is how things will turn out as far as passthrough capabilities in the latest hypervisors, due to arrive soon.

• The Microsoft Hyper-V flavor that'll arrive with Windows Server 2012 R2 (end of year). Imagine automated storage tiering that automatically handles moving your most active VMs to faster SSD-based datastores.
• VMware ESXi 5.5/ESXi 6.0, or whatever they'll be calling their next big release. I'm guessing it'll be announced at VMworld 2013 on Monday, August 26, 2013, with the GA code available for download that same day.

I'll be covering the next VMware release in depth, and I have a history of publishing the world's very first walkthrough videos and how-tos, so be sure to follow along. Here's an example from last year, with in-depth 5.1 coverage and demonstrations on the day of its announcement.

If a hypervisor can support passing through (aka "pinning") a consumer-focused PCIe card to a particular VM, then all that's needed for that VM to support the device are native drivers. So imagine a Windows 8.1 VM running with full USB 3.1 support. That could be interesting, even if not officially supported by the hypervisor vendor. For passthrough to work, it turns out that discrete PCIe cards tend to fare better than the equivalent features built into motherboards; my success with the HighPoint RocketU 1144A is documented over here.
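
If you want to see what your own host considers eligible, the information is in the output of `esxcli hardware pci list`. Below is a rough Python sketch (ESXi ships a Python interpreter) that filters that output down to passthrough-capable devices. The field names I'm keying on — "Passthru Capable", "Vendor Name", "Device Name" — are assumptions from memory and may differ between ESXi releases, so treat this as a starting point, not a supported tool.

```python
# Rough sketch: filter "esxcli hardware pci list" output down to devices the host
# reports as passthrough-capable. The field names below ("Passthru Capable",
# "Vendor Name", "Device Name") are assumptions and may vary by ESXi release.
import subprocess

def list_pci_devices():
    proc = subprocess.Popen(["esxcli", "hardware", "pci", "list"],
                            stdout=subprocess.PIPE)
    output, _ = proc.communicate()
    devices, current = [], {}
    for raw_line in output.decode("utf-8", "replace").splitlines():
        if not raw_line.strip():
            continue
        if not raw_line.startswith(" "):   # unindented line starts a new device block
            if current:
                devices.append(current)
            current = {"Address": raw_line.strip()}
        elif ":" in raw_line:
            key, _sep, value = raw_line.strip().partition(":")
            current[key.strip()] = value.strip()
    if current:
        devices.append(current)
    return devices

if __name__ == "__main__":
    for dev in list_pci_devices():
        if dev.get("Passthru Capable", "").lower() == "true":
            print("%s  %s %s" % (dev.get("Address", "?"),
                                 dev.get("Vendor Name", ""),
                                 dev.get("Device Name", "")))
```

You could eyeball the same output straight from the ESXi shell, of course; a script is just handier when comparing a few hosts, or rechecking after a BIOS or firmware change.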

So a client OS with native drivers in that VM is all that's really needed to put that consumer-focused PCIe technology to work in a home lab. Make sense? Comment below!

See also:

*ESXi 5.1 has been more problematic for reliable passthrough functionality, but that may just be a temporary setback on my particular vZilla hardware, where I simply moved over to RDM mappings with eSATA to work around it, explained here. Time will tell whether passthrough support comes back and works well, as I'll soon find out when testing ESXi 5.5/6.0 later this summer.