How to easily update your VMware Hypervisor from 6.x to 6.7 (ESXi 6.7 Build 8169922) using ESXCLI or VUM

Posted by Paul Braren on Apr 18 2018 (updated on Aug 15 2018) in
  • ESXi
  • Virtualization
  • HowTo
  • HomeLab

    This article has largely been superseded by the newer version, see:

    easy-update-to-latest-esxi

    See also Eric Siebert's Important information to know before upgrading to vSphere 6.7 and Brandon Lee's VMware vSphere ESXi 6.7 New Features Installing and Upgrading.



    The ESXCLI method of updating is a more universal way to upgrade ESXi that works even for the free hypervisor. It's actually a one-liner that downloads the patch directly, side-stepping the preferred VUM method for those without VCSA, and for those without a My VMware account or with an expired trial. The ESXCLI method doesn't offer quite as easy a way to revert (aka roll back) if things go wrong. If you have access to the latest ESXi ISO, downloading and booting from that and choosing Upgrade is safer. If you have VCSA 6.7, using vSphere Update Manager (VUM) is safer too. It's a lot more fun to have VCSA in your home lab, and running it beyond the 60 day trial has gotten a whole lot more affordable with the 365 day renewable VMUG Advantage EVALExperience. See also VMware ESXi Upgrade.

    Warning!
    All hypervisor upgrades come with risks, including the slight possibility of losing your network connections, so proceed at your own risk only after reading the entire article, and after backing up your hypervisor first, as detailed below.

    Disclaimer/Disclosure
    I cannot feasibly provide support for your upgrade, especially given the variety of unsupported hardware out there; see the full disclaimer at below-left. This article is focused mostly on small home labs, was voluntarily authored, and is not associated with my employment at VMware. It is not official documentation. I work in the storage division, separate from the group developing and supporting the hypervisor.

    If you don't have a backup and you don't have any support contract with VMware (such as VMUG Advantage EVALExperience), you are putting yourself at risk if you don't take a moment to back up your ESXi before proceeding. Note that I have a full walk-through video below of free Windows software that allows you to do it.

    Don't rush things. At a minimum, even for a home lab, you'll want to read this entire article before patching anything! Special thanks go out to VCDX 194 Matt Kozloski, whose invaluable feedback improved my recent update articles.

    Step 1 - do your homework

    VMware ESXi 6.7 Build 8169922

    Read all three of the KB articles below for details on what this patch fixes. I have brief excerpts below each link, to encourage you to read each of the source KB articles in their entirety.

    Step 2 - Follow Prerequisites

    Once you've completed ALL of the following preparation steps:

    1. upgraded to the latest VCSA, which is currently 6.7 Build 8217866, see How to easily update your VMware vCenter Server Appliance from 6.x to 6.7 (VCSA 6.7 Build 8217866)
    2. I tend to set my modern systems' BIOS to UEFI mode (instead of Dual) as a bit of future-proofing, see details here. You can read Mike Foley's warnings in Secure Boot for ESXi 6.5 – Hypervisor Assurance

      ...
      Possible upgrade issues
      UEFI secure boot requires that the original VIB signatures are persisted. Older versions of ESXi do not persist the signatures, but the upgrade process updates the VIB signatures.

      If your host was upgraded using the ESXCLI command then your bootloader wasn’t upgraded and doesn’t persist the signatures. When you enable Secure Boot after the upgrade, an error occurs. You can’t use Secure Boot on these installations and will have to re-install from scratch to gain that support.
      ...

    3. Backed up the ESXi 6.x hypervisor you've already installed and configured, for easy roll-back in case things go wrong. If it's on USB or SD, it's best to clone to a new USB drive and boot from it, to be sure your "backup" is good. You can use one of the home-lab-friendly and super easy methods such as USB Image Tool under Windows, as detailed by Florian Grehl at virten.net here.
      If you don't wish to do either, at least follow this VMware KB article:
      How to back up ESXi host configuration (2042141)
    4. Ensured your ESXi 6.x host has a working internet connection.
    5. Reviewed the ESXi 6.7 release notes too.
    6. Read this entire article, yes, even the entire set of prerequisites above.
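However you capture that backup, it's worth verifying the clone actually matches the original before trusting it. Here's a minimal shell sketch of a checksum comparison; the image paths and sizes are hypothetical stand-ins for the files a tool like USB Image Tool produces:

```shell
# Hypothetical image paths; point these at the files your imaging tool wrote.
orig=/tmp/esxi-usb-original.img
clone=/tmp/esxi-usb-clone.img

# Simulate a 1 MiB image and its clone, purely for illustration.
head -c 1048576 /dev/zero > "$orig"
cp "$orig" "$clone"

# Identical SHA-256 hashes mean the clone is byte-for-byte identical.
sum_orig=$(sha256sum "$orig" | awk '{print $1}')
sum_clone=$(sha256sum "$clone" | awk '{print $1}')
if [ "$sum_orig" = "$sum_clone" ]; then
    echo "clone verified"
else
    echo "clone MISMATCH" >&2
fi
```

A mismatch here is exactly the "funky state" you want to catch before the upgrade, not after.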

    Step 3 - Perform Upgrade using VUM (vSphere Update Manager)

    Helpful for folks who have VCSA already installed and configured. Instructions coming soon, similar to what you see here and here.

    - OR -

    Step 3 - Perform Upgrade using ESXCLI

    Step-by-Step Instructions

    Download and upgrade to VMware ESXi 6.7 Build 8169922 using the patch bundle that comes directly from the VMware Online Depot

    The entire process including reboot usually takes well under 10 minutes, and many of the steps below are optional, which makes it look more difficult than it is. Triple-clicking a line of code below highlights the whole thing along with a carriage return, so when you right-click to copy it and then paste it into your SSH session, it gets executed immediately. If you want to edit the line before it's executed, instead swipe your mouse across the line of code manually, leaving off any trailing spaces.

    1. Open an SSH session (e.g. PuTTY) to your ESXi 6.x server
      (if you forgot to enable SSH, here's how)

    2. OPTIONAL - Turn on Maintenance Mode - Or you can just be sure to manually shut down all the VMs you care about gracefully, including VCSA. These instructions are geared to a home lab without High Availability enabled. This is also a good time to ensure you've set your ESXi host to automatically and gracefully shut down all VMs upon host reboot, or if you don't use vCenter or VCSA, use this Host Client method.

    3. OPTIONAL - Reboot (Pro Tip courtesy of VCDX 194 Matt Kozloski) - Consider rebooting your ESXi server and maybe even a hard power cycle before updating. Matt explains:

      if people are running on SD cards or USB sticks and they haven't rebooted the server in a LONG time to patch/update, I would strongly recommend doing a reboot of the server before applying any updates. I've seen, more than once, the SD card or the controller go into some funky state, and as ESXi is running largely in memory, it can come up half patched or not patched at all. A [cold] reboot before update helps with that (again, if a server has been running for a long period of time - like a year+ - since it was rebooted last). Cold (remove the power cables) can be important, if the SD card or USB stick is actually running on an embedded controller like iLO or iDRAC.

    4. OPTIONAL - Firewall allow outbound http requests - This command is likely not needed if you're upgrading from 6.5.x, and is here in case you get an error about https access. I'm trying to make these instructions applicable to the broadest set of readers. Paste the one line below into your SSH session, then press enter:

      esxcli network firewall ruleset set -e true -r httpClient

      More details about the firewall here.

    5. OPTIONAL - See a list of all available ESXi profiles - VMware's Upgrade or Update a Host with Image Profiles documentation tells you how this command was formed. Paste the one line below into your SSH session, then press enter:

      esxcli software sources profile list --depot=https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

      You can cut-and-paste the output from the above command into a spreadsheet if you'd like, so you can then sort it, making it apparent which profile is the most recent.

    6. Dry Run - Taking this extra step will help you be sure of what is about to happen, before it actually happens.
      Here's the simple command to cut-and-paste into your SSH session:
      esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-8169922-standard --dry-run

      If you see some VIBs that are going to be removed that you need, you'll need to be fully prepared to manually re-install them after the actual upgrade below. If it's a network VIB that is used for your ESXi service console, you'll want to be extra careful to re-install that same VIB before rebooting your just-patched host(s). Don't just assume some later VIB version will work fine with your hardware, use what you know works, and carefully double-check the VMware Compatibility Guide for the recommended version.

      Warning! I constructed the right command syntax and tested this with the dry run, but I have not actually tested this upgrade myself yet!

    7. ACTUAL RUN - This is it, the all-in-one download and patch command, assuming your ESXi host has internet access. This will pull down the ESXi Image Profile using https, then it will run the patch script.
      When you paste this line into your SSH session and hit enter, you'll need to be patient, as nothing seems to happen at first. It will take roughly 3 to 10 minutes before the completion screen (sample below) appears:

      esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-8169922-standard
    8. OPTIONAL - Firewall disallow outbound http requests - To return your firewall to how it was before (optional) step 4 above, simply copy and paste the following:

      esxcli network firewall ruleset set -e false -r httpClient
    9. Attention Xeon D-1500 Owners - These 3 lines used to be needed, but so far, Xeon D-1500 systems seem to work fine with the inbox drivers in 6.7 itself.

    10. Reboot - This is needed for the new hypervisor version to be loaded upon restart. You may want to watch the DCUI (local console) as it boots, to see if any errors show up.

      reboot
    11. OPTIONAL - If you turned on Maintenance Mode in step 2 above, you'll need to turn it off.

    12. You're Done! - You may want to continue with checking whether everything is working correctly after your system is back up again, but you are done with the update itself. You can also watch the DCUI during boot if you'd like, to see if you spot any warnings.

    13. Test things out - Log in with ESXi Host Client (pointing your browser directly at your IP address or ESXi servername), and be sure everything seems to function fine. You may need to re-map USB devices to VMs that use USB, and you may need to re-map VT-d (passthrough) devices to VMs that use passthrough devices like GPUs.

    14. You're Really Done! - If you're happy that everything seems to be working well, that's a wrap, but keep that backup, just in case you notice something odd later on.
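As an alternative to the spreadsheet sort mentioned in step 5, the newest image profile can be surfaced right in the shell. This sketch sorts sample lines shaped like the depot listing (the profile names come from this article, the rest is illustrative); on a real host you would pipe the actual `esxcli software sources profile list` output through the same `sort`:

```shell
# Sample lines in the shape of the profile list output (illustrative).
cat > /tmp/profiles.txt <<'EOF'
ESXi-6.5.0-20180304001-standard  VMware, Inc.  PartnerSupported
ESXi-6.7.0-8169922-standard      VMware, Inc.  PartnerSupported
ESXi-6.5.0-4564106-standard      VMware, Inc.  PartnerSupported
EOF

# Split on '-': field 2 is the version, field 3 the build/patch number.
# Sort by version (newest first), then numerically by build number, so the
# ESXi-6.7.0-8169922-standard line sorts to the top here.
newest=$(sort -t- -k2,2Vr -k3,3nr /tmp/profiles.txt | head -1)
echo "$newest"
```

Note that sorting on the build number alone would mislead you: a 6.5 patch like 20180304001 is numerically larger than 6.7's 8169922, which is why the version field is the primary key.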

    Version Confirmation

    Now that you've updated and rebooted, various UIs will show your ESXi version, depending upon where you look.

    Host Client:

    • Version: 6.7.0 (Build 8169922)
    • Image profile: (Updated) ESXi-6.7.0-8169922-standard (VMware, Inc.)

    vSphere Web Client (Flash):

    • Hypervisor: VMware ESXi, 6.7.0, 8169922
    • Image Profile: ESXi-6.7.0-8169922-standard

    vSphere Client (HTML5):

    • Hypervisor: VMware ESXi, 6.7.0, 8169922
    • Image Profile: ESXi-6.7.0-8169922-standard

    SSH session to updated ESXi host:

    vmware -vl
    • VMware ESXi 6.7.0 build-8169922 | VMware ESXi 6.7.0 GA
    uname -a
    • VMkernel xd-1541-5028d.lab.local 6.7.0 #1 SMP Release build-8169922 Apr 3 2018 14:48:22 x86_64 x86_64 x86_64 ESXi
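For a scripted post-upgrade check, the build number can be extracted from `vmware -vl` style output. A small sketch against the sample string above; on the host itself you'd feed it the real command's output instead:

```shell
# Sample output captured from the upgraded host (from this article).
v='VMware ESXi 6.7.0 build-8169922
VMware ESXi 6.7.0 GA'

# Pull out the digits following "build-".
build=$(printf '%s\n' "$v" | sed -n 's/.*build-\([0-9][0-9]*\).*/\1/p' | head -1)
echo "$build"    # prints 8169922
```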

    Notes for Xeon D Owners

    how-to-install-esxi-on-xeon-d-1500-supermicro-superserver

    Video

    Step-by-step video showing me upgrading a Xeon D in my home lab is coming soon. Meanwhile, I do have a video of how easy backing ESXi itself up can be.

    USB Image Tool for Windows easily backs up and restores complete VMware ESXi installed on USB or SD

    REFERENCE

    You should wind up with the same results after this upgrade as folks who upgrade by downloading the full ESXi 6.7 ISO, creating bootable media from that ISO, then booting from that media (or mounting the ISO over IPMI/iLO/iDRAC/IMM/iKVM):

    File size: 330.31 MB
    File type: iso
    Name: VMware-VMvisor-Installer-6.7.0-8169922.x86_64.iso
    Release Date: 2018-04-17
    Build Number: 8169922

    installing it, rebooting, patching per instructions below, and rebooting again.


    See also at TinkerTry

    vmug-advantage-has-esxi-and-vcsa-6-7-with-365-day-keys

    easy-update-to-vcsa-67

    downloadvsphere67

    how-to-install-esxi-on-xeon-d-1500-supermicro-superserver

    easy-update-to-esxi-65u1-20180304001-standard

    supermicro-superservers-vcg-updated-to-65u1

    supermicro-sys-e300-9d-superserver-is-the-only-xeon-d-2100-for-home-labs

    meltdown-and-spectre-info

    superservers


    See also

    vsphere-esxi-67-upgrade-guide
    • VMware ESXi Upgrade

      Upgrading Hosts That Have Third-Party Custom VIBs
      A host can have custom vSphere installation bundles (VIBs) installed, for example, for third-party drivers or management agents. When you upgrade an ESXi host to 6.7, all supported custom VIBs are migrated, regardless of whether the VIBs are included in the installer ISO.
      If the host or the installer ISO image contains a VIB that creates a conflict and prevents the upgrade, an error message identifies the VIB that created the conflict. To upgrade the host, take one of the following actions:

      • Remove the VIB that created the conflict from the host and retry the upgrade. If you are using vSphere Update Manager, select the option to remove third-party software modules during the remediation process. For more information, see the Installing and Administering VMware vSphere Update Manager documentation. You can also remove the VIB that created the conflict from the host by using esxcli commands. For more information, see Remove VIBs from a Host.

      • Use the vSphere ESXi Image Builder CLI to create a custom installer ISO image that resolves the conflict. For more information about vSphere ESXi Image Builder CLI installation and usage, see the vCenter Server Installation and Setup documentation.
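One quick way to spot third-party VIBs before upgrading is to filter the vendor column of `esxcli software vib list`. A sketch against sample lines in that shape (the community `sata-xahci` entry is illustrative); on the host, pipe the real command's output into the same `awk`:

```shell
# Sample lines in the shape of `esxcli software vib list` output:
# Name  Version  Vendor  Acceptance Level (entries illustrative).
cat > /tmp/viblist.txt <<'EOF'
esx-base    6.5.0-1.41.7967591    VMware    VMwareCertified
net-ixgbe   3.7.13.7.14iov-20vmw  VMware    VMwareCertified
sata-xahci  1.42-1                VFrontDe  CommunitySupported
EOF

# Anything whose vendor column isn't VMware is a third-party VIB,
# worth cross-checking against the dry run's "VIBs Removed" list.
awk '$3 != "VMware"' /tmp/viblist.txt
```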


    That document hasn't been updated for 6.7, but holds valuable information.

    ESXi-6.5.0

    There is no 6.7 version of this document yet.


    Upgrade Log

    ESXi 6.7 upgrade log coming soon; meanwhile, I have the dry run output below, and what the previous 6.5 update looked like here:

    Below, I've pasted the full text of my update. It will help you see what drivers are touched. Just use the horizontal scroll bar or shift + mousewheel to look around, and Ctrl+F to Find stuff quickly:

    As also seen in my video of my previous upgrade, here's the full contents of my ssh session, as I completed my Xeon D-1541 upgrade from
    Version: 6.5.0 Update 1 (Build 7388607)
    to:
    Version: 6.5.0 Update 1 (Build 7967591)

    login as: root
    Using keyboard-interactive authentication.
    Password:
    The time and date of this login have been sent to the system logs.
    
    WARNING:
       All commands run on the ESXi shell are logged and may be included in
       support bundles. Do not provide passwords directly on the command line.
       Most tools can prompt for secrets or accept them from standard input.
    
    VMware offers supported, powerful system administration tools.  Please
    see www.vmware.com/go/sysadmintools for details.
    
    The ESXi Shell can be disabled by an administrative user. See the
    vSphere Security documentation for more information.
    [root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard --dry-run
    Update Result
       Message: Dryrun only, host not changed. The following installers will be applied: [BootBankInstaller]
       Reboot Required: true
       VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
       VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
       VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, 
VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, 
VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, 
VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
    [root@xd-1567-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.5.0-20180304001-standard
    Update Result
       Message: The update completed successfully, but the system needs to be rebooted for the changes to be effective.
       Reboot Required: true
       VIBs Installed: VMware_bootbank_cpu-microcode_6.5.0-1.41.7967591, VMware_bootbank_esx-base_6.5.0-1.41.7967591, VMware_bootbank_esx-tboot_6.5.0-1.41.7967591, VMware_bootbank_vsan_6.5.0-1.41.7547709, VMware_bootbank_vsanhealth_6.5.0-1.41.7547710
       VIBs Removed: VMware_bootbank_cpu-microcode_6.5.0-0.0.4564106, VMware_bootbank_esx-base_6.5.0-1.36.7388607, VMware_bootbank_esx-tboot_6.5.0-1.36.7388607, VMware_bootbank_vsan_6.5.0-1.36.7388608, VMware_bootbank_vsanhealth_6.5.0-1.36.7388609
       VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.650.0.0.4564106, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.650.0.0.4564106, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.650.0.0.4564106, VMW_bootbank_ata-pata-via_0.3.3-2vmw.650.0.0.4564106, VMW_bootbank_block-cciss_3.6.14-10vmw.650.0.0.4564106, VMW_bootbank_char-random_1.0-3vmw.650.0.0.4564106, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.650.0.14.5146846, VMW_bootbank_elxnet_11.1.91.0-1vmw.650.0.0.4564106, VMW_bootbank_hid-hid_1.0-3vmw.650.0.0.4564106, VMW_bootbank_i40en_1.3.1-5vmw.650.1.26.5969303, VMW_bootbank_igbn_0.1.0.0-15vmw.650.1.36.7388607, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.650.0.0.4564106, VMW_bootbank_ixgben_1.4.1-2vmw.650.1.26.5969303, VMW_bootbank_lpfc_11.1.0.6-1vmw.650.0.0.4564106, VMW_bootbank_lsi-mr3_6.910.18.00-1vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt2_20.00.01.00-3vmw.650.0.0.4564106, VMW_bootbank_lsi-msgpt3_12.00.02.00-11vmw.650.0.0.4564106, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.650.0.0.4564106, VMW_bootbank_misc-drivers_6.5.0-1.36.7388607, VMW_bootbank_mtip32xx-native_3.9.5-1vmw.650.0.0.4564106, VMW_bootbank_ne1000_0.8.0-16vmw.650.1.26.5969303, VMW_bootbank_nenic_1.0.0.2-1vmw.650.0.0.4564106, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.650.0.0.4564106, VMW_bootbank_net-bnx2x_1.78.80.v60.12-1vmw.650.0.0.4564106, VMW_bootbank_net-cdc-ether_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.650.0.0.4564106, VMW_bootbank_net-e1000_8.0.3.1-5vmw.650.0.0.4564106, 
VMW_bootbank_net-e1000e_3.2.2.1-2vmw.650.0.0.4564106, VMW_bootbank_net-enic_2.1.2.38-2vmw.650.0.0.4564106, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.650.0.0.4564106, VMW_bootbank_net-forcedeth_0.61-2vmw.650.0.0.4564106, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.650.0.0.4564106, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.650.0.0.4564106, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.650.0.0.4564106, VMW_bootbank_net-nx-nic_5.0.621-5vmw.650.0.0.4564106, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.650.0.0.4564106, VMW_bootbank_net-usbnet_1.0-3vmw.650.0.0.4564106, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.650.0.0.4564106, VMW_bootbank_nhpsa_2.0.6-3vmw.650.0.0.4564106, VMW_bootbank_nmlx4-core_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-en_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx4-rdma_3.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_nmlx5-core_4.16.0.0-1vmw.650.0.0.4564106, VMW_bootbank_ntg3_4.1.3.0-1vmw.650.1.36.7388607, VMW_bootbank_nvme_1.2.0.32-5vmw.650.1.36.7388607, VMW_bootbank_nvmxnet3_2.0.0.23-1vmw.650.1.36.7388607, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_pvscsi_0.1-1vmw.650.1.26.5969303, VMW_bootbank_qedentv_2.0.3.29-1vmw.650.0.0.4564106, VMW_bootbank_qfle3_1.0.2.7-1vmw.650.0.0.4564106, VMW_bootbank_qflge_1.1.0.3-1vmw.650.0.0.4564106, VMW_bootbank_qlnativefc_2.1.50.0-1vmw.650.1.26.5969303, VMW_bootbank_sata-ahci_3.0-26vmw.650.1.26.5969303, VMW_bootbank_sata-ata-piix_2.12-10vmw.650.0.0.4564106, VMW_bootbank_sata-sata-nv_3.5-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-promise_2.12-3vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil24_1.1-1vmw.650.0.0.4564106, VMW_bootbank_sata-sata-sil_2.3-4vmw.650.0.0.4564106, VMW_bootbank_sata-sata-svw_2.3-3vmw.650.0.0.4564106, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.650.0.0.4564106, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.650.0.0.4564106, VMW_bootbank_scsi-aic79xx_3.1-5vmw.650.0.0.4564106, 
VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.650.0.0.4564106, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.650.0.0.4564106, VMW_bootbank_scsi-hpsa_6.0.0.84-1vmw.650.0.0.4564106, VMW_bootbank_scsi-ips_7.12.05-4vmw.650.0.0.4564106, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.650.0.0.4564106, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.650.0.0.4564106, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.650.0.0.4564106, VMW_bootbank_scsi-mpt2sas_19.00.00.00-1vmw.650.0.0.4564106, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.650.0.0.4564106, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.650.0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libata-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfc-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-libfcoe-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-1-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-2-0_6.5.0-0.0.4564106, VMW_bootbank_shim-vmklinux-9-2-3-0_6.5.0-0.0.4564106, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.650.0.0.4564106, VMW_bootbank_usbcore-usb_1.0-3vmw.650.1.26.5969303, VMW_bootbank_vmkata_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmkplexer-vmkplexer_6.5.0-0.0.4564106, VMW_bootbank_vmkusb_0.1-1vmw.650.1.36.7388607, VMW_bootbank_vmw-ahci_1.0.0-39vmw.650.1.26.5969303, VMW_bootbank_xhci-xhci_1.0-3vmw.650.0.0.4564106, VMware_bootbank_emulex-esx-elxnetcli_11.1.28.0-0.0.4564106, VMware_bootbank_esx-dvfilter-generic-fastpath_6.5.0-1.36.7388607, VMware_bootbank_esx-ui_1.23.0-6506686, 
VMware_bootbank_esx-xserver_6.5.0-0.23.5969300, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-5vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-10vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-7vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-8vmw.650.1.26.5969303, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-6vmw.650.1.26.5969303, VMware_bootbank_native-misc-drivers_6.5.0-0.0.4564106, VMware_bootbank_rste_2.0.2.0088-4vmw.650.0.0.4564106, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.10-1.26.5969303, VMware_locker_tools-light_6.5.0-1.33.7273056
    [root@xd-1567-5028d:~] reboot

    Dry Run Output

    Yes, this is the output I received when typing this on a system that was already at 6.7, as I don't yet have another system available to test the 6.5 to 6.7 ESXCLI upgrade technique. I hope to have more time this weekend for further testing and updating of this article.

    login as: root
    Using keyboard-interactive authentication.
    Password:
    The time and date of this login have been sent to the system logs.
    
    WARNING:
       All commands run on the ESXi shell are logged and may be included in
       support bundles. Do not provide passwords directly on the command line.
       Most tools can prompt for secrets or accept them from standard input.
    
    VMware offers supported, powerful system administration tools.  Please
    see www.vmware.com/go/sysadmintools for details.
    
    The ESXi Shell can be disabled by an administrative user. See the
    vSphere Security documentation for more information.
    [root@xd-1541-5028d:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-8169922-standard --dry-run
    Update Result
       Message: Dryrun only, host not changed. The following installers will be applied: []
       Reboot Required: false
       VIBs Installed:
       VIBs Removed:
       VIBs Skipped: VMW_bootbank_ata-libata-92_3.00.9.2-16vmw.670.0.0.8169922, VMW_bootbank_ata-pata-amd_0.3.10-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-atiixp_0.4.6-4vmw.670.0.0.8169922, VMW_bootbank_ata-pata-cmd64x_0.2.5-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-hpt3x2n_0.3.4-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-pdc2027x_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-serverworks_0.4.3-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-sil680_0.4.8-3vmw.670.0.0.8169922, VMW_bootbank_ata-pata-via_0.3.3-2vmw.670.0.0.8169922, VMW_bootbank_block-cciss_3.6.14-10vmw.670.0.0.8169922, VMW_bootbank_bnxtnet_20.6.101.7-11vmw.670.0.0.8169922, VMW_bootbank_brcmfcoe_11.4.1078.0-8vmw.670.0.0.8169922, VMW_bootbank_char-random_1.0-3vmw.670.0.0.8169922, VMW_bootbank_ehci-ehci-hcd_1.0-4vmw.670.0.0.8169922, VMW_bootbank_elxiscsi_11.4.1174.0-2vmw.670.0.0.8169922, VMW_bootbank_elxnet_11.4.1094.0-5vmw.670.0.0.8169922, VMW_bootbank_hid-hid_1.0-3vmw.670.0.0.8169922, VMW_bootbank_i40en_1.3.1-18vmw.670.0.0.8169922, VMW_bootbank_iavmd_1.2.0.1011-2vmw.670.0.0.8169922, VMW_bootbank_igbn_0.1.0.0-15vmw.670.0.0.8169922, VMW_bootbank_ima-qla4xxx_2.02.18-1vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-devintf_39.1-4vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-msghandler_39.1-4vmw.670.0.0.8169922, VMW_bootbank_ipmi-ipmi-si-drv_39.1-4vmw.670.0.0.8169922, VMW_bootbank_iser_1.0.0.0-1vmw.670.0.0.8169922, VMW_bootbank_ixgben_1.4.1-11vmw.670.0.0.8169922, VMW_bootbank_lpfc_11.4.33.1-6vmw.670.0.0.8169922, VMW_bootbank_lpnic_11.4.59.0-1vmw.670.0.0.8169922, VMW_bootbank_lsi-mr3_7.702.13.00-4vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt2_20.00.04.00-4vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt35_03.00.01.00-10vmw.670.0.0.8169922, VMW_bootbank_lsi-msgpt3_16.00.01.00-1vmw.670.0.0.8169922, VMW_bootbank_misc-cnic-register_1.78.75.v60.7-1vmw.670.0.0.8169922, VMW_bootbank_misc-drivers_6.7.0-0.0.8169922, VMW_bootbank_mtip32xx-native_3.9.6-1vmw.670.0.0.8169922, VMW_bootbank_ne1000_0.8.3-4vmw.670.0.0.8169922, 
VMW_bootbank_nenic_1.0.11.0-1vmw.670.0.0.8169922, VMW_bootbank_net-bnx2_2.2.4f.v60.10-2vmw.670.0.0.8169922, VMW_bootbank_net-bnx2x_1.78.80.v60.12-2vmw.670.0.0.8169922, VMW_bootbank_net-cdc-ether_1.0-3vmw.670.0.0.8169922, VMW_bootbank_net-cnic_1.78.76.v60.13-2vmw.670.0.0.8169922, VMW_bootbank_net-e1000_8.0.3.1-5vmw.670.0.0.8169922, VMW_bootbank_net-e1000e_3.2.2.1-2vmw.670.0.0.8169922, VMW_bootbank_net-enic_2.1.2.38-2vmw.670.0.0.8169922, VMW_bootbank_net-fcoe_1.0.29.9.3-7vmw.670.0.0.8169922, VMW_bootbank_net-forcedeth_0.61-2vmw.670.0.0.8169922, VMW_bootbank_net-igb_5.0.5.1.1-5vmw.670.0.0.8169922, VMW_bootbank_net-ixgbe_3.7.13.7.14iov-20vmw.670.0.0.8169922, VMW_bootbank_net-libfcoe-92_1.0.24.9.4-8vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-core_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-mlx4-en_1.9.7.0-1vmw.670.0.0.8169922, VMW_bootbank_net-nx-nic_5.0.621-5vmw.670.0.0.8169922, VMW_bootbank_net-tg3_3.131d.v60.4-2vmw.670.0.0.8169922, VMW_bootbank_net-usbnet_1.0-3vmw.670.0.0.8169922, VMW_bootbank_net-vmxnet3_1.1.3.0-3vmw.670.0.0.8169922, VMW_bootbank_nhpsa_2.0.22-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-core_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-en_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx4-rdma_3.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx5-core_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_nmlx5-rdma_4.17.9.12-1vmw.670.0.0.8169922, VMW_bootbank_ntg3_4.1.3.0-1vmw.670.0.0.8169922, VMW_bootbank_nvme_1.2.1.34-1vmw.670.0.0.8169922, VMW_bootbank_nvmxnet3-ens_2.0.0.21-1vmw.670.0.0.8169922, VMW_bootbank_nvmxnet3_2.0.0.27-1vmw.670.0.0.8169922, VMW_bootbank_ohci-usb-ohci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_pvscsi_0.1-2vmw.670.0.0.8169922, VMW_bootbank_qcnic_1.0.2.0.4-1vmw.670.0.0.8169922, VMW_bootbank_qedentv_2.0.6.4-8vmw.670.0.0.8169922, VMW_bootbank_qfle3_1.0.50.11-9vmw.670.0.0.8169922, VMW_bootbank_qfle3f_1.0.25.0.2-14vmw.670.0.0.8169922, VMW_bootbank_qfle3i_1.0.2.3.9-3vmw.670.0.0.8169922, VMW_bootbank_qflge_1.1.0.11-1vmw.670.0.0.8169922, 
VMW_bootbank_sata-ahci_3.0-26vmw.670.0.0.8169922, VMW_bootbank_sata-ata-piix_2.12-10vmw.670.0.0.8169922, VMW_bootbank_sata-sata-nv_3.5-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-promise_2.12-3vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil24_1.1-1vmw.670.0.0.8169922, VMW_bootbank_sata-sata-sil_2.3-4vmw.670.0.0.8169922, VMW_bootbank_sata-sata-svw_2.3-3vmw.670.0.0.8169922, VMW_bootbank_scsi-aacraid_1.1.5.1-9vmw.670.0.0.8169922, VMW_bootbank_scsi-adp94xx_1.0.8.12-6vmw.670.0.0.8169922, VMW_bootbank_scsi-aic79xx_3.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-bnx2fc_1.78.78.v60.8-1vmw.670.0.0.8169922, VMW_bootbank_scsi-bnx2i_2.78.76.v60.8-1vmw.670.0.0.8169922, VMW_bootbank_scsi-fnic_1.5.0.45-3vmw.670.0.0.8169922, VMW_bootbank_scsi-hpsa_6.0.0.84-3vmw.670.0.0.8169922, VMW_bootbank_scsi-ips_7.12.05-4vmw.670.0.0.8169922, VMW_bootbank_scsi-iscsi-linux-92_1.0.0.2-3vmw.670.0.0.8169922, VMW_bootbank_scsi-libfc-92_1.0.40.9.3-5vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-mbox_2.20.5.1-6vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid-sas_6.603.55.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-megaraid2_2.00.4-9vmw.670.0.0.8169922, VMW_bootbank_scsi-mpt2sas_19.00.00.00-2vmw.670.0.0.8169922, VMW_bootbank_scsi-mptsas_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-mptspi_4.23.01.00-10vmw.670.0.0.8169922, VMW_bootbank_scsi-qla4xxx_5.01.03.2-7vmw.670.0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-iscsi-linux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libata-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfc-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-libfcoe-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-1-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-2-0_6.7.0-0.0.8169922, VMW_bootbank_shim-vmklinux-9-2-3-0_6.7.0-0.0.8169922, 
VMW_bootbank_smartpqi_1.0.1.553-10vmw.670.0.0.8169922, VMW_bootbank_uhci-usb-uhci_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usb-storage-usb-storage_1.0-3vmw.670.0.0.8169922, VMW_bootbank_usbcore-usb_1.0-3vmw.670.0.0.8169922, VMW_bootbank_vmkata_0.1-1vmw.670.0.0.8169922, VMW_bootbank_vmkfcoe_1.0.0.0-1vmw.670.0.0.8169922, VMW_bootbank_vmkplexer-vmkplexer_6.7.0-0.0.8169922, VMW_bootbank_vmkusb_0.1-1vmw.670.0.0.8169922, VMW_bootbank_vmw-ahci_1.2.0-6vmw.670.0.0.8169922, VMW_bootbank_xhci-xhci_1.0-3vmw.670.0.0.8169922, VMware_bootbank_cpu-microcode_6.7.0-0.0.8169922, VMware_bootbank_elx-esx-libelxima.so_11.4.1184.0-0.0.8169922, VMware_bootbank_esx-base_6.7.0-0.0.8169922, VMware_bootbank_esx-dvfilter-generic-fastpath_6.7.0-0.0.8169922, VMware_bootbank_esx-ui_1.25.0-7872652, VMware_bootbank_esx-xserver_6.7.0-0.0.8169922, VMware_bootbank_lsu-hp-hpsa-plugin_2.0.0-13vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-lsi-mr3-plugin_1.0.0-12vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-lsi-msgpt3-plugin_1.0.0-8vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-megaraid-sas-plugin_1.0.0-9vmw.670.0.0.8169922, VMware_bootbank_lsu-lsi-mpt2sas-plugin_2.0.0-7vmw.670.0.0.8169922, VMware_bootbank_native-misc-drivers_6.7.0-0.0.8169922, VMware_bootbank_qlnativefc_3.0.1.0-5vmw.670.0.0.8169922, VMware_bootbank_rste_2.0.2.0088-7vmw.670.0.0.8169922, VMware_bootbank_vmware-esx-esxcli-nvme-plugin_1.2.0.32-0.0.8169922, VMware_bootbank_vsan_6.7.0-0.0.8169922, VMware_bootbank_vsanhealth_6.7.0-0.0.8169922, VMware_locker_tools-light_10.2.0.7253323-8169922
    [root@xd-1541-5028d:~]
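    When the dry run output looks clean, the actual upgrade is the same command without `--dry-run`. Here's a best-effort sketch of the full sequence on a host with internet access (the profile name matches this article's target build; adjust for your environment):

```shell
# Allow the host to reach the VMware online depot over HTTPS
esxcli network firewall ruleset set -e true -r httpClient

# Optional but safest: enter maintenance mode first
esxcli system maintenanceMode set -e true

# Apply the ESXi 6.7 image profile directly from the depot
esxcli software profile update \
  -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml \
  -p ESXi-6.7.0-8169922-standard

# Close the firewall rule back up, then reboot to finish
esxcli network firewall ruleset set -e false -r httpClient
reboot
```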

    All Comments on This Article (62)

    I tried the VUM approach following the instructions on vmiss.net but got stuck because I have a single host with VCSA running on it.

    Is there any way to use VUM with a single host with VCSA running on it?

    If no, it would be really nice if this was prominently mentioned in the VUM articles. Or maybe I missed it?

    Awesome, thank you for the kind words!

    This script is fantastic! As a novice playing in a home lab, I had originally gone to install 6.7, only to find the version no longer officially supported a number of pieces of hardware in my rig, namely my Xeon E5620 CPUs in my R710 (along with other issues; I think either my RAID controller or NICs). So installing 6.5 and loading drivers as errors arose was my first go-round.

    After playing around with KVM installs and other fun things, I came back to VMware and installed the latest Dell custom image for the R710, which is dated to 6.0 (and comes packaged with the necessary drivers). Running the above then bypasses the install errors for "unsupported" Xeon processors, so I can play with the latest vSphere client. (Reading through Reddit, folks seem to have found hit-and-miss success with the 56xx chips, and complete failure with 55xx.)

    All that to say, the simple steps above are amazing. Simple, elegant, and bypass a number of headaches (e.g. expired VMware trials, unsupported hardware) that otherwise are a pain to attack from any other angle. Thanks a bunch for this!

    Sorry I forgot to reply to your great feedback. You were right, this does happen, ran into it with 6.7U2 recently:
    https://TinkerTry.com/easy-update-to-latest-esxi#upgrade-log
    and still no elegant work-around, hmmm.

    I'm so sorry, this comment slipped through the cracks, and I'm just now circling back to see if you ever found a resolution to this. Yeah, the only free way to give feedback on VMUG Advantage EVALExperience versions of vSphere is the little smiley face at the top right of the vSphere Client UI, that is, if you have that VCSA appliance installed.

    Thanks, Paul. Network guy here, broadening my skill base and loaded up an ESXi lab on a cheap Dell R410 w/6.5 initially from an old image I had lying around and installed the free license to play with. Upgrade ran great, and all seems well on 6.7 with my Ubuntu, RHEL 7, and GNS3 VMs. Appreciate the basic command-line simplicity!

    May I ask what kind of system, and is everything still working ok? I guess I'm not asking if I can ask, I'm asking whether you're willing to answer ;) Just being a little silly, long day...

    Awesome to hear that, thank you for sharing!

    Worked perfectly. Updated from 6.5 to 6.7 in about 5 minutes on my test host. I was even too lazy to shut the VMs down and enter maintenance mode first, but it still worked fine.



    After update and reboot my Windows 10 VM and Ubuntu 18.04 server VM were fine.

    After struggling with a "[Errno 28] No space left on device" error on a fresh VMware install, I found the following solution; please consider adding it to your excellent guide (https://communities.vmware.com/thread/595325): enter the host web UI, go to Host > System > Swap, and activate swap on your VMFS datastore.
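    For reference, the same swap workaround appears to be available from the ESXi Shell as well; a best-effort sketch (not verified on every build, and "datastore1" is a placeholder for your VMFS datastore name):

```shell
# Show the current system swap configuration
esxcli sched swap system get

# Enable system swap on a VMFS datastore (replace datastore1 with yours)
esxcli sched swap system set --datastore-enabled true --datastore-name datastore1
```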


    Sorry to hear about this issue, and for my slow response. Any luck poking around VMTN forums or elsewhere on the web?

    I'm now on Update 1 and the problem persists. Not really sure what's going on.

    I've not noticed this, but I'm now on 6.7 Update 1:
    https://TinkerTry.com/easy-update-to-latest-esxi

    I've updated to 6.7.0 (Build 8169922). Anyone else experiencing problems with the 'Monitor'? Despite the fact that I select "Last hour", I can only get about 5 minutes' worth of stats (max).

    Enter the UI, go to Host > Manage, and enable datastore swap, then issue the command again; disable swap afterward if you want...

    Thanks, I'll check out that new article of yours.

    Regarding the USB, It doesn't seem to be "Failed" exactly, just seems like an update may have messed things up like you described here.

    "if people are running on SD cards or USB sticks and they haven't rebooted the server in a LONG time to patch/update, I would strongly recommend doing a reboot of the server before applying any updates. I've seen, more than once, the SD card or the controller goes into some funky state and as ESXi is running largely in memory, it can comes up half patched or not patched at all. A [cold] reboot before update helps with that (again, if a server has been running for a long period of time - like a year+ - since it was rebooted last). Cold (remove the power cables) can be important, if the SD card or USB stick is actually running on an embedded controller like iLO or iDRAC."

    Mostly curious to find a way to confirm whether this was in fact the case, and to find more info on this phenomenon. It may help with future recovery/prevention.
    Thanks again, much appreciated.

    I believe my newer article will be of help
    https://TinkerTry.com/easy-update-to-latest-esxi
    including my recommended way to back up, let me know what you think. I would avoid using that failed USB drive for anything, not worth the trouble, see:
    https://TinkerTry.com/nice-little-usb-flash-drive-choice-for-that-esxi-in-your-home-lab
    I hope this helps?

    Great article, I do a very similar procedure at my office. I believe I may have fallen into the pitfall you mentioned regarding updating ESXi running from a USB.
    One evening, after a power failure, I was trying to boot ESXi and it would pink-screen me. Inspected the USB and it had a bunch of extra partitions, directories, and files.

    Do you have any additional sources or suggestions like the one in "Step 3 - Perform Upgrade using ESXCLI". It would be nice to document exactly what went wrong.

    Also, what, in your opinion, would be the best way of maintaining an up-to-date backup ESXi USB? Just in case something like this occurs again I could just swap it out.
    Thanks

    Yeah, that's looking to be too old for ESXi 6.0, 6.5, and 6.7, see:
    https://www.vmware.com/resources/compatibility/detail.php?deviceCategory=server&productid=3974&deviceCategory=server&details=1&partner=41&keyword=%22DL380%20G6%22&page=1&display_interval=10&sortColumn=Partner&sortOrder=Asc
    That doesn't mean it won't work, but I can't say for sure, as I don't have any first-hand knowledge of that system.

    You might consider using USB for ESXi to give it a try, and if you're already using USB, here's one way to revert should things go wrong after the upgrade to 6.7:
    https://TinkerTry.com/clone-esxi-with-usb-image-tool
    which is especially important since you won't have official support.

    Hi Paul. I have at home 2 x HP:
    Model: ProLiant DL380 G6
    CPU: 8 CPUs x Intel(R) Xeon(R) CPU E5540 @ 2.53GHz

    They're in a cluster with an MSA2012 NAS (12-disk RAID 5).

    I was wondering if the upgrade is going to work for this hardware. Do you know if it's supported?
    Because I have a cluster, I can go back without downtime, so it's not too big of a problem if it fails, but maybe you have the answer. Regards, Tim

    O Btw amazing detailed post! thx for that.

    Excellent to hear that, enjoy your weekend of tinkering with ESXi 6.7!

    Glad to see that workaround works, but it is sad that this seems to be a real bug. Hopefully, VMware/Supermicro fixes this in next driver/esxi release. Agree it is important to report it.

    I would very much appreciate it if you would report it to Supermicro too:
    https://www.supermicro.com/24Hour/24hour.cfm
    It's always much more effective for an owner of the affected product to get in touch with them directly, from me, it's just another second-hand unsubstantiated story. Thank you for all your time on this, Alessandro!

    Great that you blogged about that; hopefully that can help others too. I see that you've shared the info with Supermicro directly... Do you think it would help for me to open a bug with VMware too, or are you taking care of this on your side?

    Thanks again for the help!

    Thank you for the details! I've added highlights to my X557 article here:
    https://TinkerTry.com/how-to-work-around-intermittent-intel-x557-network-outages-on-12-core-xeon-d#may-06-2018-update

    Paul, thanks for sharing the last link, it seems exactly the same issue I had. Sorry for not noticing it earlier.

    I have the same hardware Brian has, and, like him, I'm using gigabit network switches. Changing the configuration so speed is set to 1000 seems to have fixed the issue (and it survived a reboot). I had to do it using the ESXi Shell via IPMI (Alt + F1), which is quite awkward but worked. Sounds like it's a real bug.
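    For those who want to try the same fix, forcing the link speed from the ESXi Shell looks roughly like this (a best-effort sketch; vmnic0 is a placeholder for the affected port):

```shell
# Show current link state and speed for all adapters
esxcli network nic list

# Force 1000 Mb/s full duplex on the affected port
esxcli network nic set -n vmnic0 -S 1000 -D full

# Later, to return the port to auto-negotiation:
esxcli network nic set -n vmnic0 -a
```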

    Now, I just need to re-configure ESXi based on my docs, since I had to do a fresh-install (ouch).

    PS: I did take the power off (and kept it off for a bit) twice, and it didn't work. I am already on the last BIOS (1.3) too.

    It's a long shot, but have you de-energized your motherboard completely at any point during your troubleshooting (removed the power cord), and did that temporarily bring back your X557 connections after plugging in, powering up, and letting ESXi finish booting up? See why I ask at:
    https://TinkerTry.com/how-to-work-around-intermittent-intel-x557-network-outages-on-12-core-xeon-d

    For those following along, here's Alessandro's motherboard:
    http://www.supermicro.com/products/motherboard/xeon/d/X10SDV-4C-TLN2F.cfm
    which indeed only has two 10GbE Intel X557 based ports, similar to what HP did with their Xeon D:
    https://TinkerTry.com/xeon-d-landscape-2017#hpe

    Also worth checking out what Bruno wrote here:
    https://TinkerTry.com/how-to-install-esxi-on-xeon-d-1500-supermicro-superserver#comment-3870383122

    Hi Paul, thanks for responding so fast!

    My motherboard only came with two 10GbE X557 ports, and I don’t have any gigabit Ethernet. This has made things a bit challenging, but I’m able to run commands via the console using IPMI. I typed the model wrong; it’s actually an X10SDV-4C-TLN2F (Xeon D-1521).

    I had tried reinstalling the VIB from Intel, but it still doesn’t work. ESXi recognizes the network adapters, but they’re reported as “down” all of the time. The cable is connected and the light is on.

    I have tried simply re-installing 6.7, following your guide for a fresh install, wondering if my specific installation was corrupted... that again did not fix the issue.

    I’m starting to wonder if there’s a bug preventing X557 network cards from working with 6.7 as management NICs, although they are certified by VMware to work, and the VIB lists 6.7 as supported.

    Hello Alessandro, sorry for the issue. It seems some folks get an easy upgrade while others encounter issues, which is why I'll need to add even greater emphasis to the recommended backup procedures in this article, especially for folks who don't have VMware support, and since there is no roll-back for the ESXCLI style of upgrade (I used the word revert above; I'll add roll-back so it's easier to find that warning). Your data is safe though, that's not the issue; it's just that troubleshooting can be time consuming.
    Let me start with (best-effort help here) which network ports you're having issues with. Did your 1GbE Intel I350 ports go down, or your 10GbE Intel X557 ports? Knowing which is essential for me to form a response. You wrote IXGE, but perhaps that's a typo and you meant IXGBE (10GbE X557):
    https://TinkerTry.com/how-to-check-which-network-driver-your-esxi-server-is-currently-using
    If that's the case, that's likely pretty easy to remedy, especially if you still have internet connectivity to your ESXi host through the working 1GbE, see fix at:
    https://TinkerTry.com/how-to-install-intel-x552-vib-on-esxi-6-on-superserver-5028d-tn4t
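    A quick best-effort way to confirm which ports and drivers are in play from an SSH session (vmnic names vary per host):

```shell
# List all physical NICs with their link state and bound driver
esxcli network nic list

# Confirm whether an Intel 10GbE driver VIB survived the upgrade
esxcli software vib list | grep -i ixgbe
```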

    If troubleshooting networking ends up being needed, that can be tricky, but know that even if that fails, a reinstall of 6.5 is an option; your VMFS datastores and the VMs on them will remain intact, but you'll then have to reconfigure all your other settings and bring your VMs back into inventory.

    I'm working on an article about step-by-step procedures that will help everybody back up ESXi if they're on USB or SD cards, for free, using a combination of http://www.alexpage.de/usb-image-tool/ and Rufus
    https://TinkerTry.com/rufus-lets-you-quickly-and-easily-reformat-an-esxi-usb-flash-drive-back-to-full-capacity
    in case an old ESXi USB key is being used and needs to be wiped first before being re-used for backup duties.

    I have done this on my Supermicro X10SDV-TLN4F (Xeon D-1541), and I've completely lost networking. I have re-installed the VIB for IXGE, as per the guide for 6.5, but I still can't get any networking to work. My host recognizes the two NICs, but it says that they're in down state. I am not sure what else to try, nor how to rollback.

    It worked for me great, didn't even require a reboot. Thanks so much for the link!

    Thank you! Having an issue with a different server/USB drive...

    Sorry, somehow the link I pasted in the moderator panel didn’t make it, here’s the intended link that might help:
    https://TinkerTry.com/easy-upgrade-to-esxi-65u1ep04#comment-3597014113

    Hi -- I don't see the report (?) Was there supposed to be a link?

    This KB article helped me: https://kb.vmware.com/s/article/2004784

    In a nutshell, I changed boot.cfg (backing up original for safe keeping):

    kernelopt=no-auto-partition

    to

    kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE
    (all in one line)

    Which allowed me to have more space on the USB drive for persistent storage.

    The upgrade worked well, which is great because a fresh installation wasn't allowing me to pass through devices or even save settings (!), and the partition flags weren't working, either.

    Thanks for the guide!
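    For reference, that boot.cfg edit could be scripted roughly like this (a best-effort sketch based on the commenter's steps and KB 2004784, not independently verified here; /bootbank is the live boot config, so back up first):

```shell
# Keep a copy of the original boot config for safekeeping
cp /bootbank/boot.cfg /bootbank/boot.cfg.bak

# Swap the kernelopt line (the replacement is all one line in boot.cfg)
sed -i 's/^kernelopt=no-auto-partition$/kernelopt=autoPartition=TRUE skipPartitioningSsds=TRUE autoPartitionCreateUSBCoreDumpPartition=TRUE/' /bootbank/boot.cfg

# Verify the change before rebooting
grep '^kernelopt' /bootbank/boot.cfg
```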

    Perhaps this tip from a similar report 6 months ago is helpful?

    [root@esxi:~] esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.7.0-8169922-standard
    [InstallationError]
    [Errno 28] No space left on device
    vibs = VMware_locker_tools-light_10.2.0.7253323-8169922
    Please refer to the log file for more details.

    Bummer. Looks like I should have bought bigger USB drives... (16GB here).

    @tinkererguy:disqus

    Update. I have good and bad news.

    The Good: You can pass the USB RocketU to a windows 10 VM in ESXi 6.7
    The Bad: This may need a total reinstallation of a VM.

    I got suspicious when my Linux VM had no problem with USB passthrough and no VMtools installed, while the Win10 VM with VMtools would boot to a black screen.

    I had to recreate a VM and reinstall Windows 10, and test passthrough without installing the VM tools. It WORKS !!!!

    As far as I can tell, installing the VMtools kills the USB passthrough.

    I totally missed a key point here: in the end, regardless of the update method, we should reach the same results:
    https://TinkerTry.com/easy-update-to-esxi-67#reference
    See also VMware docs on the many update methods here:
    https://TinkerTry.com/easy-update-to-esxi-67#see-also

    Thanks for taking the time to report success here, Eddie! I agree, 6.7 seems very easy on Xeon D-1500. Thanks for mentioning this alternative method, which is not always available for free hypervisor owners, who are sometimes stuck at a back-level ISO.
    See also fresh install with just baked in drivers at:
    How to easily install VMware ESXi 6.7 on an Intel Xeon D Supermicro SuperServer
    https://TinkerTry.com/how-to-install-esxi-on-xeon-d-1500-supermicro-superserver

    I upgraded my Supermicro D1557 from the latest 6.5 U1 to 6.7 via ISO and then chose "upgrade". That worked absolutely fine for me. There was no need to do anything special for my 1GbE and 10GbE RJ45 NICs. All four links were running from the very beginning, and after some tests I saw no problem. So I wonder if it's really necessary to add the drivers using the ESXCLI method. Or is there anything else to do with an ISO upgrade?

    Thank you for reporting this here! Disappointing to hear, given an earlier RocketU card worked during my tests with a Core i7 quite a while back (https://TinkerTry.com/usb3passthru), but this stuff is so system-specific and hit-or-miss...

    I tried and it doesn't even boot to the bios or EFI....

    You just might need to take a trip into the VM's BIOS after passing through the HPT controller, just in case the boot order is being altered. Probably a long shot but perhaps worth a look.

    I cannot seem to pass the PCI USB controllers to windows 10.
    I add the PCI device but the Win10 VM doesn't boot and screen is black.

    @ Paul,

    I have a HighPoint 4-port USB 3.0 PCI-Express 2.0 x4 HBA, the RocketU 1144D, which doesn't seem to work for a Win10 VM with ESXi 6.7

    Some update: I created a brand new VM with Windows 10 and installed updated tools (there's a bug in the former VM; I cannot upgrade them). Graphics card passthrough worked, but only after I upgraded the VM to HW14.

    The Windows 10 VM boots if I add no passthrough. I see a bug, as I cannot seem to upgrade the VMware Tools. Let's see if I have the same on a newly created VM.

    OK, some update: my Linux virtual machine sees all of the passthrough devices when they're added to it. I will investigate what is going on with the Windows 10 virtual machine.

    I just upgraded from 6.5, no issues during the upgrade, but my graphic card and usb card do not seem to work with the windows 10 virtual machine I had (it was HW10). I upgraded to HW14 but when I add anything for passthrough, I get a black screen from windows 10, and it doesn't seem to start up.

    I'm just glad you're all set now. Tough to know what went wrong here with the ESXCLI approach; perhaps the system hadn't been rebooted in a long time? I've heard of issues with that, which is why the VUM method reboots the host before applying the changes.
    Also, not that you asked, but I'll throw it out there, that installing to SATADOM or USB tends to be more convenient, and makes it easy and safe to keep multiple versions around, for about $12 each for USB
    https://TinkerTry.com/nice-little-usb-flash-drive-choice-for-that-esxi-in-your-home-lab
    This also leaves your precious NVMe space for just one big VMFS, with zero impact that I've been able to measure in boot time, and certainly no impact to the speed of VMs, since it's all loaded into RAM anyway. See also
    https://blogs.vmware.com/virtualblocks/2016/03/18/virtual-san-design-considerations-for-booting-from-a-flash-device/
    http://thenicholson.com/using-sd-cards-embedded-esxi-vsan/
    https://kb.vmware.com/s/article/2145210
    In the server room, a dual RAID1 mirror of M.2 AHCI SSDs has been the way forward since last summer; see the example of Dell EMC BOSS http://en.community.dell.com/techcenter/b/techcenter/archive/2018/03/29/operating-system-support-for-boss-boot-optimized-storage-solution-device

    I couldn't wait any longer to troubleshoot, so I fixed it by reinstalling ESXi 6.7 from the ISO, and everything worked. Apologies, but I needed the workloads online.

    When I click on that link, it takes me to the following screen. https://uploads.disquscdn.com/images/31830f12ce33b1585a599079121334bdfd761fad89784e1e4cb4138a193ecf25.png

    and another article about space:
    https://www.virtualmvp.com/vsphere-6-5-transport-vmdb-error-45-failed-to-connect-to-peer-process/
    so if you're installing on USB, I'd suggest wiping all partitions before starting over, see easy and free GUI method here:
    https://TinkerTry.com/rufus-lets-you-quickly-and-easily-reformat-an-esxi-usb-flash-drive-back-to-full-capacity

    Also a long shot, but a look around in the BIOS settings might also be in order:
    https://www.reddit.com/r/vmware/comments/4s67yw/vmdb_error_45/
    Are you set up exactly like this (with UEFI on)?
    https://TinkerTry.com/recommended-bios-settings-supermicro-superserver-sys-5028d-tn4t

    I found this older (ESXi 5.5) article about ramdisk being full, have a look:
    https://communities.vmware.com/thread/469469
    Can you elaborate a bit on what exact type of media did you install ESXi on, and what capacity is that device?

    Excellent, thank you shmookles for the details. To help searchers pick this conversation up, here's the exact error you're seeing:
    "Failed to power on virtual machine xxx. Transport (VMDB) error -45: Failed to connect to peer process. Click here for more details."
    When you click on "click here", does it hyperlink to a URL you can share here? As you might suspect, I have not run into this error myself. Thanks again for taking the time to answer my many questions!

    1) VMware 13, but I upgraded them to 14 thinking that was the issue, no such luck.
    2) VMware ESXi 6.5 Update 1, Build 7967591
    3) No Sorry, never had issues in the past so didn't think to.
    4) Nothing special.
    5) Nothing special. On VMFS 6
    6) Newly created machines do not power on either.
    7) stand-alone ESXi
    8) can try to later
    9) yes supported hardware similar to bundle 1 but with a different case using X10SDV-TLN4F-O Supermicro board. I do have the ability to open an SR if needed.

    For folks following along trying to gauge whether this one report means there is a serious problem or not, please keep some perspective, folks succeeding tend to not leave comments, and only 1 in 2000 readers on average leave any comments at all. This is just one data point.

    Thank you for the information and screenshot, shmookies! If I could ask for a little more information, it could be helpful when searching around (VMTN forums, etc):
    1) What Virtual Machine hardware version were your VMs running?
    2) What ESXi 6.5 Build were you on before the upgrade?
    3) Did you keep a copy and paste of your upgrade (ESXCLI results in the SSH session)?
    4) Anything special, like pass through of GPUs or SR-IOV enabled for those problematic VMs? I realize that's a long shot, as you say all VMs, just double-checking.
    5) Anything unusual, like old VMFS version, or are your VMFS datastores at 6.0?
    6) Are you able to create a new VM that powers on?
    7) Do you have EVC turned on? https://kb.vmware.com/s/article/1003212, or is this a stand-alone ESXi 6.7 host?
    8) Have you another USB drive laying around, to try a fresh 6.7 install, and see if it behaves the same way? See also my successful simple ESXi 6.7 fresh install walk-through at https://TinkerTry.com/how-to-install-esxi-on-xeon-d-1500-supermicro-superserver
    9) Is it supported hardware, and do you have the ability to open an SR# with VMware if needed (I'm just doing best-effort help here, ultimately best if VMware support becomes aware of this issue, and quite possibly in the coming days Google search and VMTN forums searches will start to yield some relevant hits too).

    After doing the ESXCLI update from 6.5 to 6.7, no VMs will power on. No error during the update, either. See screenshot. https://uploads.disquscdn.com/images/270825f66aa9e2a3923fa46b4b843358191b3fbdae674191dea2f7cb5e065597.png

    After the ESXCLI update to 6.7, I'm getting a strange error: "Failed to power on virtual machine hass. Transport (VMDB) error -45: Failed to connect to peer process. Click here for more details." After the update, no VMs are able to power on, and all show this error.