How to configure VMware ESXi 6.5.x for VMDirectPath I/O pass-through of any NVMe SSD, demonstrated with Windows Server 2016 installed directly on an Intel Optane P4800X booted in an EFI VM
A little-discussed capability arrived with VMware ESXi 6.5 back in November of 2016, and it has finally matured. With last month's release of ESXi 6.5 Update 1, it's only gotten easier to configure your ESXi server to pass a single NVMe storage device, such as an Intel Optane P4800X, right through to one of your VMs. Nice! This has big implications for those interested in performant NAS VMs, and for other special use cases including nested ESXi instances and vSAN.
Another nifty attribute of NVMe passthrough is that there's no need to configure clunky RDMs (Raw Device Mappings), like you might need to do for SAS or SATA drives that are configured as JBOD but share a PCIe device. How is that possible? Read onward...
With NVMe passthrough, aka VMDirectPath I/O, aka VT-d, each NVMe device gets its very own entry in the ESXi Host Client's Hardware tab, PCI Devices area, even if you have up to 4 of them on a single PCIe adapter such as the Amfeltec Squid PCI Express Gen3 Carrier Board for 4 M.2 SSD modules. In my Bundle 2 Supermicro SuperServer SYS-5028D-TN4T Xeon D-1541 system, BIOS 1.2a lets me set 4x4x4x4 bifurcation of the PCIe slot, and that's where the magic begins. Don't worry about the fancy word; bifurcation basically means the slot's signal can be split 4 ways. It's not complicated, and I've documented where the setting is located on this server in the Recommended BIOS Settings. Since the Xeon D provides a full-speed PCIe 3.0 x16 slot, each M.2 NVMe device gets its own 4 lanes. That also means each M.2 NVMe device runs at full speed, even when all 4 devices are accessed concurrently. Each NVMe device can be assigned to its own VM, even a nested ESXi instance. Just think of the potential here for vSAN!
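You can also see those per-device entries from the ESXi shell (assuming SSH is enabled), since `esxcli hardware pci list` reports each bifurcated M.2 device as its own "Non-Volatile memory controller". The excerpt below is a fabricated sample saved to a local file purely for illustration; on a live host you'd pipe the real command's output instead:

```shell
# Fabricated sample excerpt of `esxcli hardware pci list` output;
# on a real ESXi host, run the esxcli command itself over SSH instead.
cat > pci-sample.txt <<'EOF'
0000:02:00.0
   Address: 0000:02:00.0
   Device Class Name: Non-Volatile memory controller
0000:03:00.0
   Address: 0000:03:00.0
   Device Class Name: Non-Volatile memory controller
EOF

# Each bifurcated M.2 device shows up as its own controller entry,
# which is why each one can be toggled for passthrough independently.
grep -c "Non-Volatile memory controller" pci-sample.txt
```

With two devices in the sample, the count printed is 2; a fully populated Squid adapter would show 4 such entries.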
In this particular example, recorded on video below, I went with just one NVMe device, the Intel Optane P4800X. I could have set the BIOS to UEFI, installed Windows Server 2016 Standard (Desktop Experience) on the drive directly, then converted that same Windows install to boot from inside an EFI VM, but I went the simpler route to keep the video concise.
I demonstrate how to create an empty, drive-less VM, boot the blank NVMe drive inside it, and install Windows Server 2016 right on there. If you already have an OS on the drive, no worries. Wait, there's more!
That NVMe drive can also be booted natively, without ESXi, regardless of whether you installed the OS while the drive was inside a VM or on bare metal. Yes, NVMe boots both ways!
Note that this particular P4800X is just an engineering sample with test firmware, and it booted both ways wonderfully even with an earlier pre-release firmware. That's right, the Intel Optane P4800X is so new that it's not even shipping quite yet. Full disclosure below.
For a bit of backstory on pass-through devices, see also VMware's KB 1010789:
This simple Optane configuration that I've now completed, live on camera for you, will allow me to demonstrate very-nearly-native Intel Optane speeds at VMworld 2017 US next week, even while my vSphere 6.5 Update 1 datacenter's other VMs keep running. Nice!
So without further verbiage, let me show you how it's done! While there's a bit more to it than the KB article tells you, it's not difficult.
- A modern PC with the BIOS set to EFI/UEFI mode
- An NVMe SSD installed, such as an M.2 NVMe device like the Samsung 960 PRO or 960 EVO, or an HHHL PCIe device like the Intel Optane SSD DC P4800X Series
- VMware ESXi 6.5.x (I tested with 6.5 Update 1)
- With your system powered off and unplugged, install your NVMe device(s)
- Turn on your system
- Use the VMware Host Client (browser UI) to log in to the name or IP of your host (server)
- Click Manage / Hardware / PCI Devices, look for the entry that says "Non-Volatile memory controller" and turn its checkbox ON, click Toggle passthrough, then click Reboot host
- Wait for the reboot to complete, then log back in with the vSphere Web Client
- Create a VM, and in the Customize settings section of the wizard, delete Hard disk 1; if there's no OS on your NVMe device already, mount your install ISO (in my example, my Windows Server 2016 evaluation ISO)
- Now click the New device: section at the bottom, click Select, then PCI Device, and make sure the added device's selection drop-down shows the particular NVMe device you're trying to pass through for this VM's exclusive use; it should prompt you to reserve all memory
- On the VM Options tab, select Boot Options / Firmware / EFI
- Finish the VM creation wizard, open a VM console view, then power up the VM and be ready to quickly press any key to boot from the bootable ISO, beginning your normal installation right onto your NVMe device, the only drive visible in that VM
- Once your VM is up, install VMware Tools and the recommended NVMe driver from your SSD vendor, then reboot
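Under the hood, the wizard choices above end up as a handful of entries in the VM's .vmx file. The sketch below is illustrative only: the PCI address is a placeholder (yours will differ), and the exact set of pciPassthru keys ESXi writes can vary by build.

```
firmware = "efi"
pciPassthru0.present = "TRUE"
pciPassthru0.id = "0000:02:00.0"
sched.mem.pin = "TRUE"
```

The firmware line corresponds to the EFI choice on the VM Options tab, the pciPassthru0 lines to the passed-through NVMe device, and sched.mem.pin to the "reserve all memory" requirement that passthrough imposes. If the device doesn't appear in the guest after a host reboot, re-check the PCI Devices page to confirm the passthrough toggle survived the reboot.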
Update, Nov 2 2018:
This SSD is on a temporary loan from Intel, and it appears to be an engineering sample in the 375GB size that is planned for product launch. This loan was made with no formal expectations or stipulations, just a brief chance to TinkerTry this datacenter technology in my own VMware vSphere home lab. Remember this site's tagline: TinkerTry IT @ home. Efficient virtualization, storage, backup, and more.
- Meet me at the VMworld 2017 US PEX and HCI Zone booths, or at the TinkerTry Virtualization Hardware Discussion Zone in the VMTN Community Blogger area, Tue. Aug. 29 11am-2pm (Aug 17 2017)
- Windows Server 2016 ISO now available for download, Microsoft offering free datacenter licenses to VMware users (Oct 01 2016)
- How to boot Windows 10 from NVMe based PCIe storage, featuring Samsung 950 PRO M.2 SSD in a Supermicro SYS-5028D-TN4T (Nov 05 2015)