How to configure VMware ESXi 6.7 or later for VMDirectPath I/O pass-through of any NVMe SSD, such as Windows 10 1809 installed directly on an Intel 900P booted in an EFI VM
A little-discussed capability arrived with VMware ESXi 6.5 back in November of 2016, and it has finally matured.
With this year's ESXi 6.7 release, and last month's release of ESXi 6.7 Update 1, it's only getting easier and easier to configure your ESXi server to pass a single NVMe storage device, such as an Intel Optane P4800X or the Intel Optane SSD 900P Series, right through to one of your VMs. Nice!
This has big implications for those interested in performant NAS VMs, or other special use cases including nested ESXi instances and vSAN.
Another nifty attribute of NVMe passthrough is that there's no need to configure clunky RDMs (Raw Device Mappings), like you might need to do for SAS or SATA drives that are configured as JBOD but share a PCIe device. How is that? Read onward...
How NVMe pass-through works
With NVMe passthrough, aka VMDirectPath I/O, aka VT-d, each NVMe device gets its very own entry on the ESXi Host Client's Hardware tab, in the PCI Devices area, even if you have up to 4 of them on a single PCIe adapter such as the Amfeltec Squid PCI Express Gen3 Carrier Board for 4 M.2 SSD modules. In my Bundle 2 Supermicro SuperServer SYS-5028D-TN4T Xeon D-1541 system, BIOS 2.0 lets me set 4x4x4x4 bifurcation of the PCIe slot, and that's where the magic begins. Don't worry about the fancy word: bifurcation basically means the slot's signal can be split 4 ways. It's not complicated, and I've documented where the setting is located on this server in the Recommended BIOS Settings.

Since the Xeon D's slot is a full-speed PCIe 3.0 x16, each M.2 NVMe device gets its own 4 lanes. That also means each M.2 NVMe device runs at full speed, even when all 4 M.2 devices are accessed concurrently. Each NVMe device can be passed through to its own VM, including nested ESXi instances. Just think of the potential here for vSAN!
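If you'd rather confirm those per-device entries from the command line, here's a quick sketch from an SSH session on my host. The grep filter assumes the device class reads "Non-Volatile memory controller", which is how my NVMe devices report themselves; exact output varies by system:

```
# From the ESXi shell (SSH): quick list of PCI devices, filtered to NVMe
lspci | grep -i "non-volatile"

# Fuller detail, including the vendor/device IDs and the address
# you'll see in the Host Client's PCI Devices list:
esxcli hardware pci list | grep -i -B 6 "non-volatile"
```

With 4x4x4x4 bifurcation and a fully loaded Squid, you'd expect to see four such entries, one per M.2 device.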
In this particular example that I recorded on video below, I went with just one NVMe device, the Intel Optane 900P. While I could have set the BIOS to UEFI, installed Windows 10 1809 on the drive directly, and then converted that same Windows install to boot from inside an EFI VM, I went the simpler route to keep the video concise.
I demonstrate how to create an empty, drive-less VM, then boot the blank NVMe drive inside it, installing Windows 10 1809 right on there. If you have an OS on there already, no worries. Wait, there's more!
That NVMe drive can be booted natively as well, without ESXi, whether you installed that OS while the NVMe was running inside a VM, or outside a VM, natively. Yes, NVMe boots both ways!
For a bit of backstory on pass-through devices, see also VMware's KB 1010789.
Below, I show this simple Optane configuration that I've now completed, live on camera for you.
So without further verbiage, let me show you how it's done! While there's a bit more involved than the KB article lets on, it's not difficult either.
Prerequisites
- A modern PC with the BIOS set to EFI/UEFI mode
- An NVMe SSD installed, such as M.2 NVMe devices like the Samsung 960 PRO or 960 EVO, or HHHL PCIe devices like the Intel Optane SSD DC P4800X Series, or U.2 devices like the 900P.
- VMware ESXi 6.5 or later (I tested with 6.7 Update 1; see the quick version check below)
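Not sure which build you're on? Here's a quick way to check from an SSH session; the example output is roughly what my 6.7 Update 1 host reports:

```
# Print the ESXi version and build number (ESXi shell or SSH)
vmware -v
# Example output: VMware ESXi 6.7.0 build-10302608
```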
Step-by-step
- With your system powered off and unplugged, install your NVMe device(s)
- Turn on your system
- Open the vSphere Client (the HTML5/Clarity-based browser UI) and log in
- Click on Menu v / Hosts and Clusters / navigate to the correct host / Configure tab / Hardware / PCI Devices / CONFIGURE PASSTHROUGH, then look for the entry that says "Non-Volatile memory controller", which in my example shows as Device Name 900P Series [2.5" SFF]
- click the checkbox ON / click OK / wait for the Reboot This Host button to appear and click it, typing in a reason for doing so, then click OK
- wait for the reboot to complete, then log back in with the vSphere Client
- create a VM, and in the Customize settings section of the wizard, delete Hard disk 1. If there's no OS on your NVMe already, mount your install ISO; in my example, I used a Windows 10 1809 ISO that I created with the Media Creation Tool, and named mine
Windows-1809-creation-tool.iso
- with your VM now created, edit its properties by right-clicking on the VM and selecting Edit Settings..., then on the Virtual Hardware tab, click on the ADD NEW DEVICE button and select PCI Device, making sure the added device's selection drop-down shows the particular NVMe device you're trying to pass through
- Click on the PCI device 0 section to expand it and see what the yellow bang warning is saying, then click on the Reserve all memory button, which takes care of reserving all of the VM's memory for you. Passthrough devices can DMA directly into guest memory, so the full reservation is required (see the sample .vmx excerpt after this list for what these settings look like under the hood)
- on the VM Options tab, select Boot Options / Firmware / EFI; if you're on vSphere/ESXi 6.7, that should already be the default
- finish the VM creation wizard, bring up a VM console view, then power up the VM and be ready to quickly press any key to boot from the bootable ISO. That begins your normal installation from the ISO right onto your NVMe device, the only drive that will be visible in that VM
- once your VM is up, install VMware Tools and the recommended NVMe driver from your SSD vendor, then reboot
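For the curious, here's roughly what those last few steps end up writing into the VM's .vmx file. You shouldn't need to hand-edit any of this, since the vSphere Client takes care of it. The datastore path, VM name, PCI address, IDs, and memory size below are illustrative examples from my setup; yours will differ:

```
# From the ESXi shell: peek at what the wizard wrote into the VM's .vmx file
# (the datastore path and VM name here are examples; adjust for your setup)
grep -E 'firmware|sched\.mem|pciPassthru' /vmfs/volumes/datastore1/Win10-1809/Win10-1809.vmx

# Typical output (values from my setup; your PCI address and IDs will differ):
# firmware = "efi"
# sched.mem.min = "8192"
# sched.mem.pin = "TRUE"
# pciPassthru0.present = "TRUE"
# pciPassthru0.id = "02:00.0"
# pciPassthru0.vendorId = "0x8086"
# pciPassthru0.deviceId = "0x2700"
```

firmware = "efi" is the EFI boot option, sched.mem.min paired with sched.mem.pin is the full memory reservation that the Reserve all memory button sets, and the pciPassthru0 entries describe the NVMe controller being passed through.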
Video
This video shows the same passthrough configuration, but using the HTML5/Clarity UI of the new vSphere Client that now has 100% feature parity with the vSphere Web Client, and more. That's right, no more Adobe Flash, and it's much snappier to use. Nice!
Disclosure
This SSD was purchased at retail, so I could test it in my own VMware vSphere home lab. Remember this site's tagline: TinkerTry IT @ home. Efficient virtualization, storage, backup, and more.
See also at TinkerTry
- Microsoft Windows 10 and Windows Server 2019 download links for re-released version 1809 October 2018 Update
Nov 20 2018
- Hands-on GIGABYTE Server MB51-PS0 motherboard featuring 4 core Intel Xeon D-2123IT: unboxing in 4K
Aug 20 2018
- How to update any VMware ESXi Hypervisor to the latest using ESXCLI for easy download and install
Aug 14 2018
See a summary of reasons why I'm concerned about this drive and don't use it on a daily basis on my Xeon D-1500 based system in the article below. Looking for the ESXi Host Client way of configuring passthrough, back on vSphere/ESXi 6.5? No problem! That's covered there too:
- World's First Close Look at Intel Optane SSD DC P4800X Series - PCIe NVMe arrives with 375GB of 3D XPoint!
Aug 11 2017