How to check that your TCP Segmentation Offload is turned on in both your VMware ESXi server and your VM

Posted by Paul Braren on Oct 29 2017 (updated on Oct 31 2017) in
  • HowTo
  • Network
  • Virtualization
  • 1540873064
    Available on Amazon in Kindle and Paperback.

    It's likely you've heard many folks rave about Frank Denneman and Niels Hagoort's VMware vSphere 6.5 Host Resources Deep Dive book (on Amazon in Kindle and Paperback). I can only imagine the amount of time and dedication it took for them to get this published; commendable and inspiring.

    Let's take a closer look at applying one of the topics that Niels Hagoort recently blogged about:

    • TCP Segmentation Offload in ESXi explained
      Oct 19 2017 by Niels Hagoort

      When the NIC supports TSO, it will handle the segmentation instead of the host OS itself. The advantage being that the CPU can present up to 64 KB of data to the NIC in a single transmit-request, resulting in less cycles being burned to segment the network packet using the host CPU. To fully benefit from the performance enhancement, you must enable TSO along the complete data path on an ESXi host.
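    To make that savings concrete, here's a quick back-of-the-envelope calculation (the MTU/MSS figures are standard Ethernet assumptions of mine, not from Niels' article): at a 1500-byte MTU, the MSS is roughly 1460 bytes, so one 64 KB transmit request stands in for about 45 per-segment trips through the host's TCP stack.

```shell
#!/bin/sh
# Rough count of TCP segments covered by one 64 KB TSO transmit request
# at a standard 1500-byte MTU (MSS = 1500 - 40 bytes of IP+TCP headers).
# Without TSO, the host CPU would build each of these segments itself.
payload=65536   # 64 KB handed to the NIC in a single request
mss=1460
segments=$(( (payload + mss - 1) / mss ))   # ceiling division
echo "$segments segments per 64 KB request"
```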

    I tested this out in my home lab on a set of two Xeon D servers, with the two integrated Intel I-350 1GbE ports using the igb 5.3.3 driver, and two integrated X552/X557 10GbE ports using the 4.5.2 driver.

    I figured you'd want to try this for yourself, especially if others may have tweaked your environment, and/or you're experiencing performance problems. I've written down each step that I took, and these steps should also work fine on whatever physical gear you're running at home or at work. This won't be applicable to those running nested configurations, or VMware Workstation or Fusion.

    Verify TCP Segmentation Offload is on in both ESXi and VM(s)

    Step 1 - Temporarily enable SSH on your ESXi host

    Enable SSH if it isn't already running.

    Step 2 - Open an ssh session to your ESXi host

    Using your favorite ssh client such as PuTTY, login to your ESXi server as root.

    Step 3 - Check if the ESXi host has TSO Offload enabled

    Here are two simple ESXCLI commands from Niels' article; just cut-and-paste them into your ssh session, one line at a time:

    esxcli network nic tso get
    esxcli system settings advanced list -o /Net/UseHwTSO

    Step 4 - Analyze ESXCLI results

    This is good: "on" for all NICs, and "1" for the Int Value, is what you want to see.

    Based on Niels Hagoort's article, in my environment, here's what we see in the screenshot above:

    1. the first red box in the screenshot shows on for all 4 NICs; Niels explains that this confirms:

      TSO is enabled for all available pNICs or vmnics

    2. the second box shows a 1, which verifies:

      TSO is active within the VMkernel layer.

    If all of yours show "on", skip ahead to Step 5.
    If some show "off", you can determine which adapters those are with this additional ESXCLI command:

    esxcli network nic list
    Click here to see screenshot of "esxcli network nic list" on Xeon D Supermicro SuperServer.

    For me, this shows that vmnic0 and vmnic1 are the 1GbE I350 ports using the igbn driver, and vmnic2 and vmnic3 are the 10GbE X557 ports using the ixgbe driver.
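    If you manage several hosts, the per-NIC check is easy to script. Here's a small sketch of mine (not from Niels' article); the sample text below stands in for live `esxcli network nic tso get` output, with vmnic2 deliberately set to off for illustration, whereas on my hosts all four showed on:

```shell
#!/bin/sh
# On a live host, replace the sample variable with the real command:
#   esxcli network nic tso get | awk 'NR > 2 && $2 != "on" {print $1}'
tso_output='NIC     Value
------  -----
vmnic0  on
vmnic1  on
vmnic2  off
vmnic3  on'
# Skip the two header lines; print any NIC whose value is not "on".
off_nics=$(printf '%s\n' "$tso_output" | awk 'NR > 2 && $2 != "on" {print $1}')
echo "TSO off on: ${off_nics:-none}"
```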

    If you need to know what exact NIC driver version is loaded, this should work broadly, since it looks for all drivers with gb in the name:

    esxcli software vib list | grep gb

    Then just search for the names from the Driver column of the previous command. You'll see I've got the Intel I350 igbn and X557 ixgbe 4.5.2 drivers loaded, not VMware's inbox (included) drivers.
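    That cross-referencing can be chained together too. Here's a sketch that pairs each driver from the NIC list with its VIB version; the sample text stands in for live `esxcli` output, with columns simplified and the igbn version number purely illustrative (only the ixgbe 4.5.2 figure is from my actual hosts):

```shell
#!/bin/sh
# Sample stand-ins for `esxcli network nic list` (Name, Driver columns)
# and `esxcli software vib list | grep gb` (Name, Version columns).
nic_list='Name    Driver
------  ------
vmnic0  igbn
vmnic1  igbn
vmnic2  ixgbe
vmnic3  ixgbe'
vib_list='net-igbn   1.4.7
net-ixgbe  4.5.2'
# For each distinct driver, find the VIB whose name contains it.
report=$(printf '%s\n' "$nic_list" | awk 'NR > 2 {print $2}' | sort -u |
while read -r drv; do
  ver=$(printf '%s\n' "$vib_list" | awk -v d="$drv" '$1 ~ d {print $2}')
  echo "$drv $ver"
done)
echo "$report"
```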

    Step 5 - Check if a VM has TSO Offload enabled


    Niels' article details how to do this on Linux; in my example here, I used the Windows 10 (Version 1709) GUI.

    • press Win+R to bring up the Windows Run dialog
    • type ncpa.cpl, then press Enter
    • double-left-click your active network adapter; in VMs, the name typically contains "vmxnet3" (or whatever else your primary network interface is called, such as E1000 or E1000e)
    • left-click Properties, Configure...
    • left-click the Advanced tab

    Step 6 - Analyze VM TCP/IP Configuration

    • left-click Large Send Offload V2 (IPv4) and the Value: should show Enabled
    • left-click Large Send Offload V2 (IPv6) and the Value: should show Enabled
    • you can now Cancel, and Close the Window
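    If you'd rather skip the clicking, the same values can be read from an elevated PowerShell prompt inside the VM with the Get-NetAdapterAdvancedProperty cmdlet. A sketch (adapter names will differ in your VM):

```powershell
# List the Large Send Offload settings for every adapter in the VM.
# "Enabled" in the DisplayValue column matches what the GUI shows.
Get-NetAdapterAdvancedProperty -DisplayName "Large Send Offload V2 (IPv4)",
                                            "Large Send Offload V2 (IPv6)" |
    Format-Table Name, DisplayName, DisplayValue
```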

    Step 7 - Disable SSH

    Here's how.
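    If you'd rather stay in the terminal, SSH can also be shut down from the same session before you log out. A sketch (these vim-cmd paths are from memory and may vary by ESXi release; expect your session to drop when the service stops):

```
vim-cmd hostsvc/stop_ssh      # stop the SSH service now
vim-cmd hostsvc/disable_ssh   # don't start it automatically with the host
```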

    You're done! Don't you feel better knowing you have your TSO Offload enabled across the whole datapath, having checked your hypervisor and your VM(s)?


    Here's a paste of my entire ssh session, exactly as seen in the screenshot above:

    login as: root
    Using keyboard-interactive authentication.
    The time and date of this login have been sent to the system logs.
       All commands run on the ESXi shell are logged and may be included in
       support bundles. Do not provide passwords directly on the command line.
       Most tools can prompt for secrets or accept them from standard input.
    VMware offers supported, powerful system administration tools.  Please
    see for details.
    The ESXi Shell can be disabled by an administrative user. See the
    vSphere Security documentation for more information.
    [root@xd-1567-5028d:~] esxcli network nic tso get
    NIC     Value
    ------  -----
    vmnic0  on
    vmnic1  on
    vmnic2  on
    vmnic3  on
    [root@xd-1567-5028d:~] esxcli system settings advanced list -o /Net/UseHwTSO
       Path: /Net/UseHwTSO
       Type: integer
       Int Value: 1
       Default Int Value: 1
       Min Value: 0
       Max Value: 1
       String Value:
       Default String Value:
       Valid Characters:
       Description: When non-zero, use pNIC HW TSO offload if available
