How to configure VMware ESXi 6.7 or later for VMDirectPath I/O pass-through of any NVMe SSD, such as Windows 10 1809 installed directly on an Intel 900P booted in an EFI VM

GIGABYTE-Server-MB51-PS0-Motherboard-running-ESXi-67-off-USB-rotated-180--TinkerTry
Notice the 900P U.2 NVMe SSD above the motherboard

A little-discussed capability arrived with VMware ESXi 6.5 back in November of 2016, and it has finally matured.

See Amazon for the Intel Optane SSD 900P Series, but this consumer drive might not be ideal for most home virtualization lab enthusiasts

With this year's ESXi 6.7 release, and last month's release of ESXi 6.7 Update 1, it's only getting easier and easier to configure your ESXi server to pass a single NVMe storage device, such as an Intel Optane P4800X or the Intel Optane SSD 900P Series, right through to one of your VMs. Nice!

This has big implications for those interested in performant NAS VMs, or other special use cases including nested ESXi instances and vSAN.

Another nifty attribute of NVMe passthrough is that there's no need to configure clunky RDM (Raw Device Mappings), like you might need to do for SAS or SATA drives that are configured as JBOD but share a PCIe device. How is that? Read onward...

How NVMe pass-through works

HYPER-M-2-X16-CARD-inside-SYS-5028D-TN4T-cropped--TinkerTry

With NVMe passthrough, aka VMDirectPath I/O, aka VT-d, each NVMe device gets its very own entry on the ESXi Host Client's Hardware tab, PCI Devices area, even if you have up to 4 of them on a single PCIe adapter such as the Amfeltec Squid PCI Express Gen3 Carrier Board for 4 M.2 SSD modules. In my Bundle 2 Supermicro SuperServer SYS-5028D-TN4T Xeon D-1541 system, BIOS 2.0 lets me set 4x4x4x4 bifurcation of the PCIe slot, and that's where the magic begins. Don't worry about the fancy word: bifurcation basically means the slot's signal can be split 4 ways. It's not complicated, and I've documented where the setting is located on this server in the Recommended BIOS Settings. Since the Xeon D's slot is full-speed PCIe 3.0 with 16 lanes, each M.2 NVMe device gets its own 4 lanes. That also means each M.2 NVMe device runs at full speed, even when all 4 devices are accessed concurrently. Each NVMe device can be assigned to one VM, or to different VMs, even nested ESXi instances. Just think of the potential here for vSAN!
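
If you'd like to verify this from a script rather than the Host Client, here's a minimal pyVmomi sketch (not from the video; the hostname and credentials below are placeholders) that lists every PCI device the host considers passthrough-capable, so each NVMe controller should show up as its own entry:

```python
# Rough pyVmomi sketch: list the PCI devices ESXi considers passthrough-capable.
# Assumptions (not from this article): pyvmomi is installed, and the hostname,
# username, and password below are placeholders for your own lab host.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab-only: skip certificate validation
si = SmartConnect(host="esxi.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True).view[0]

    # Every PCI function the host knows about, keyed by its PCI address...
    pci_by_id = {d.id: d for d in host.hardware.pciDevice}

    # ...cross-referenced with the passthrough subsystem's view of each one.
    for info in host.configManager.pciPassthruSystem.pciPassthruInfo:
        if info.passthruCapable:
            dev = pci_by_id.get(info.id)
            name = dev.deviceName if dev else "(unknown)"
            print(f"{info.id}  {name}  enabled={info.passthruEnabled}")
finally:
    Disconnect(si)
```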

In this particular example that I recorded on video below, I went with just one NVMe device, the Intel Optane 900P. While I could have set the system BIOS to UEFI, installed Windows 10 1809 on the drive natively, and then booted that same Windows install from inside an EFI VM, I went the simpler route to keep the video concise.

I demonstrate how to create an empty, drive-less VM, pass the blank NVMe drive through to it, and install Windows 10 1809 right on there. If you already have an OS on the drive, no worries. Wait, there's more!

That NVMe drive can also be booted natively, without ESXi, whether you installed the OS while the NVMe was passed through to a VM or while it was running natively. Yes, the NVMe boots both ways!

For a bit of backstory on pass-through devices, see also VMware's KB 1010789.

This simple Optane configuration is exactly what I've now completed, live on camera, for you.

So without further verbiage, let me show you how it's done! While there's a bit more to it than the KB article tells you, it's not difficult either.

Prerequisites

  • A modern PC with the BIOS set to EFI/UEFI mode
  • An NVMe SSD installed, such as M.2 NVMe devices like the Samsung 960 PRO or 960 EVO, or HHHL PCIe devices like the Intel Optane SSD DC P4800X Series, or U.2 devices like the 900P.
  • VMware ESXi 6.5 or later (I tested with 6.7 Update 1)

Step-by-step

  1. With your system powered off and unplugged, install your NVMe device(s)
  2. Turn on your system
  3. Open the vSphere Client (HTML5/Clarity-based browser UI) and log in
  4. Click on Menu v / Hosts and Clusters / navigate to the correct host / Configure tab / Hardware / PCI Devices / CONFIGURE PASSTHROUGH, then look for the entry that says "Non-Volatile memory controller," which in my example shows as Device Name 900P Series [2.5" SFF]
  5. Click the checkbox ON / click OK / wait for the Reboot This Host button to appear and click it, typing in a reason for doing so, then click OK. A scripted equivalent of steps 4-6 is sketched in the first code example just after this list
  6. Wait for the reboot to complete, then log back in with the vSphere Client
  7. Create a VM, and in the Customize settings section of the wizard, delete Hard disk 1. If there's no OS on your NVMe already, mount your install ISO; in my example, I used a Windows 10 1809 ISO that I created with the Media Creation Tool and named
    Windows-1809-creation-tool.iso
  8. With your VM now created, edit its properties by right-clicking on the VM and selecting Edit Settings... On the Virtual Hardware tab, click the ADD NEW DEVICE button and select PCI Device, then make sure the added device's selection drop-down shows the particular NVMe device you're trying to pass through
  9. Click on the PCI device 0 section to expand it and see what the yellow bang warning is saying, then click the Reserve all memory button, which takes care of reserving all of the VM's memory for you
  10. On the VM Options tab, select Boot Options / Firmware / EFI; if you're on vSphere/ESXi 6.7, that should already be the default. A scripted equivalent of steps 8-10 is sketched in the second code example below this list
  11. Finish the VM creation wizard, bring up a VM console view, then power up the VM and be ready to quickly press any key to boot from the bootable ISO, beginning your normal installation right onto your NVMe device, the only drive that will be visible in that VM
  12. Once your VM is up, install VMware Tools and the recommended NVMe driver from your SSD vendor, then reboot
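
If you'd rather script steps 4 through 6 than click through them, here's a minimal pyVmomi sketch under the same assumptions as the earlier listing (placeholder hostname and credentials, and an NVMe controller whose device name happens to contain "900P"); it flags the controller for passthrough and reboots the host:

```python
# Rough pyVmomi equivalent of steps 4-6: flag the NVMe controller for passthrough,
# then reboot the host. Hostname, credentials, and the "900P" name match are
# placeholders/assumptions, not taken from the article.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    host = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem], True).view[0]

    # Find the NVMe controller by its PCI device name (adjust the match as needed).
    nvme = next(d for d in host.hardware.pciDevice if "900P" in d.deviceName)

    # The API equivalent of ticking the passthrough checkbox in the UI.
    cfg = vim.host.PciPassthruConfig(id=nvme.id, passthruEnabled=True)
    host.configManager.pciPassthruSystem.UpdatePassthruConfig([cfg])

    # A reboot is still required before the device can be handed to a VM.
    # (Enter maintenance mode first, or pass force=True at your own risk.)
    host.RebootHost_Task(force=False)
finally:
    Disconnect(si)
```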

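And here's a similarly hedged pyVmomi sketch of steps 8 through 10, assuming a powered-off VM that already exists (the VM name "Win10-NVMe", hostname, and credentials are placeholders): it adds the NVMe controller as a PCI passthrough device, reserves all of the VM's memory, and sets the firmware to EFI:

```python
# Rough pyVmomi equivalent of steps 8-10: attach the NVMe controller to an existing
# powered-off VM, reserve all of its memory, and set EFI firmware. The VM name,
# hostname, and credentials are placeholders/assumptions.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi.lab.local", user="root", pwd="VMware1!", sslContext=ctx)
try:
    objs = si.content.viewManager.CreateContainerView(
        si.content.rootFolder, [vim.HostSystem, vim.VirtualMachine], True).view
    host = next(o for o in objs if isinstance(o, vim.HostSystem))
    vm = next(o for o in objs
              if isinstance(o, vim.VirtualMachine) and o.name == "Win10-NVMe")

    nvme = next(d for d in host.hardware.pciDevice if "900P" in d.deviceName)

    # Describe the physical function the virtual PCI device should map to.
    backing = vim.vm.device.VirtualPCIPassthrough.DeviceBackingInfo(
        id=nvme.id,
        deviceId="%04x" % (nvme.deviceId & 0xFFFF),
        vendorId=nvme.vendorId,
        systemId=host.hardware.systemInfo.uuid,  # assumption: host UUID as systemId
        deviceName=nvme.deviceName)
    pci_spec = vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=vim.vm.device.VirtualPCIPassthrough(backing=backing))

    spec = vim.vm.ConfigSpec(
        deviceChange=[pci_spec],
        memoryReservationLockedToMax=True,  # the "Reserve all memory" button (step 9)
        firmware="efi")                     # EFI boot firmware (step 10)
    vm.ReconfigVM_Task(spec=spec)
finally:
    Disconnect(si)
```
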
Video

How to pass through Intel Optane 900P NVMe as Win 10 1809 VM's C: drive, using VMware ESXi 6.7U1

This video shows the same passthrough configuration, but using the HTML5/Clarity UI of the new vSphere Client, which now has 100% feature parity with the vSphere Web Client, and more. That's right, no more Adobe Flash, and it's much snappier to use. Nice!

Disclosure

This SSD was purchased at retail, so I could test it in my own VMware vSphere home lab. Remember this site's tagline: TinkerTry IT @ home. Efficient virtualization, storage, backup, and more.


See also at TinkerTry

windows-10-and-windows-server-2019-version-1809-downloads

gigabyte-xeon-d-2100-mb51-ps0-motherboard-first-look-unboxing

easy-update-to-latest-esxi

See a summary of the reasons why I'm concerned about this drive and don't use it on a daily basis on my Xeon D-1500 based system:

intel-optane-900p-should-be-great-for-home-lab-enthusiasts

Looking for the ESXi host client way of configuring passthrough, back on vSphere/ESXi 6.5? No problem! Check out this article below:

how-to-configure-vmdirectpath-pass-through-of-nvme-on-esxi-6-5-update-1

hands-on-intel-optane-dc-p4800x-series-3d-xpoint-nvme-ssd

check-nvme-ssd-firmware

my-tinkertry-d-xeon-d-bundle-2-supermicro-superserver-bundle-2-of-joy