Supermicro SuperServer Workstation Graphics Card selection revisited, featuring VMware ESXi 6.5 passthrough testing of HHHL PCIe GPUs

Posted by Paul Braren on Jul 19 2017 (updated on Mar 22 2020) in
  • ESXi
  • Virtualization
  • HowTo
  • HomeLab
  • GPU

    HHHL stands for Half Height, Half Length (source). These are the type of single-slot PCIe cards that fit into the Supermicro SuperServer SYS-5028D-TN4T mini-tower bundles, and they are entirely PCIe bus powered. That keeps power consumption generally under 70 watts even at the heaviest workloads, so the overall system stays well within the capabilities of the 250 watt power supply.

    Back in March of 2017, I needed to go back to using my SuperServer Workstation as my daily driver Windows 10 workstation, in large part because of excessive video render times in my 4K video production workflow. This meant it was time to revisit how well this VM worked with the latest ESXi 6.5.0d, determining whether all features and functions behaved as they did back in the ESXi 6.0 days, when I first wrote this article:

    This SuperServer Workstation hybrid combo is a niche build, and not a turn-key solution. Yes, I admit that Bundle 2 is much more popular for good reasons: it's sold without Windows 10 and without a GPU, a better choice for the vast majority of use cases. This is especially true for virtualization enthusiasts, who generally don't need or want a watt-burning GPU. But this article is about those who very much do want a compact GPU that takes up only one slot for their Workstation VM. The term Workstation comes from the use of an attached keyboard, mouse, and monitor, so your vSphere 6.5 Datacenter can also be your Windows 10 Workstation, simultaneously. See also How to locate your triple monitor (up to 2K) PC 20 feet away for less noise and more joy.

    With so many months having gone by since these SuperServer Workstations began shipping, it's high time I revisit this little screamer, and let you know what discoveries I made when moving from vSphere 6.0 to vSphere 6.5, and when determining whether any suitable replacements for the VisionTek AMD 7750 GPU card had arrived, with the promise of quieter operation.

    AMD Radeon Pro WX 4100 - Fail

    Notably, I failed to get the newer, more powerful, and quieter AMD Radeon Pro WX 4100 Workstation Graphics Card working properly for passthrough when using the latest system BIOS 1.2, as explained here. I saw yellow bangs on the Display adapter listed in Windows Device Manager, and/or occasional PSODs. Maybe some manual tweaks to the .VMX file will do the trick someday, if somebody figures out what those tweaks are.
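
    For anyone who wants to experiment, these are the kinds of .vmx entries people typically try for stubborn passthrough GPUs. This is a sketch only: I haven't confirmed that any of them revive the WX 4100, and the memory hole values below are placeholders you'd tune for your card:

        # added to the VM's .vmx file while the VM is powered off
        pciPassthru0.msiEnabled = "FALSE"   # fall back from MSI to legacy interrupts
        pciHole.start = "2048"              # open a PCI MMIO hole at 2GB... (placeholder value)
        pciHole.end = "2560"                # ...through 2.5GB for the card's BARs (placeholder value)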

    NVIDIA K1200 - Fail

    As for NVIDIA, well, the spiffy looking and quiet sounding K1200 was a no-go as well, exhibiting the dreaded yellow bangs through Device Manager. Yes, NVIDIA still doesn't want you to use anything but the proper NVIDIA Grid product line for vGPUs carved up among many of your VMs, explained in their video. NVIDIA has historically not been interested in allowing easy VMware ESXi passthrough for either their Quadro (workstation) or GeForce (gaming) product lines.
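
    The best-known workaround for NVIDIA drivers refusing to start inside a VM (the Code 43 yellow bang) is hiding the hypervisor from the guest. I can't say it rescues the K1200 on this system, but it's the standard first thing to try:

        # .vmx entry, added while the VM is powered off;
        # tells the guest OS it's not running under a hypervisor
        hypervisor.cpuid.v0 = "FALSE"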

    VisionTek AMD 7750 - Still a win!

    The noise of the always-on fan can be reduced by creatively using a fan speed reducer from nearly any Noctua cooling fan, installing it inline. Noctua calls it a Low-Noise Adaptor (L.N.A.), sold alone as the NA-SRC10 3-Pin Low-Noise Adaptors. Clumsy to get attached firmly, but very effective once in place, with my GPU still managing to stay cool to the touch even after long benchmarks like FurMark that doled out heavy abuse.

    I'm much happier using my Xeon D for daily use when compared to any laptop, even that lovely work-issued Core i7 Dell Precision 5510 with 1TB NVMe, worth around $3000 total. Why? Because it's still just a mobile CPU, the 4 core 8MB cache i7-6820HQ. Compare that with the 12 core 18MB cache Xeon D-1567 in my SuperServer Bundle 1, as detailed in this Intel ARK comparison table. These specs really do matter. Routine content creation tasks like Camtasia 9 video renders use all those cores and now take me much less time. See also my 4K video render measurements with various core counts here:

    Chart: 4K video render times stressing Xeon D cores, by TinkerTry.

    Here are my current observations, fresh off about 3 months of heavy use.

    The good

    Screenshot: Premium CPU in Dell Precision 5510 versus Xeon D-1567 in SuperServer Bundle 2 12 Core.
    1. snappy UI, easy triple monitor support
    2. CPU speed and multitasking abilities are impressive
    3. extreme versatility compared to any laptop
    4. many drive bays for your storage needs, and a fast M.2 slot for exceptional M.2 NVMe storage performance when used as an ESXi datastore for your Windows 10 VM (see the quick checks after this list)
    5. video render times using Camtasia 9 are greatly reduced over any laptop
    6. great performance even with 20 vCPUs assigned to this powerhouse VM that I use for dozens of hours per week
    7. sound quality of my USB to Digital Coaxial and headphone jack adapter is great
    8. I've discovered that turning VGA to offboard in the BIOS, for hand-off of video from the onboard VGA port to the offboard GPU card, isn't necessary for stable VM operation; I may want to revisit the build procedure Wiredzone follows when prepping these systems for shipment
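
    To verify item 4 from the ESXi shell, these stock esxcli commands show whether the M.2 NVMe SSD is seen, and what VMFS version the datastore on it carries (device names will differ on your system):

        esxcli storage core device list | grep -i nvme   # confirm the M.2 NVMe SSD is visible
        esxcli storage filesystem list                   # datastores, including VMFS version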

    The noteworthy gotchas

    Yes, full disclosure here, this is not a VMware supported way to run a VM, we already knew this. Only certain USB devices are supported, and really only products like NVIDIA GRID are properly supported for use as vGPUs carved up across your most important VMs. But this is a home lab, where pushing technology forward with what's possible on a budget can be fun, especially if somebody has figured out the bumps in the road before you.

    1. you need to have full daily backups of your precious VM you're using as your workstation, free and easy options include NAKIVO Backup & Replication 7 and Veeam Agent for Microsoft Windows v2.
    2. approximately every 5th reboot of the Windows 10 VM that is also my Windows 10 triple-monitor workstation, I encounter an issue with the VisionTek AMD 7750 GPU card not being passed through at all for mysterious reasons, requiring me to reboot the SuperServer's ESXi itself, using vSphere Client on another network attached PC
    3. on my attached triple monitors, I can't easily view BIOS screen and early Windows boot issues, such as BSODs, requiring me to use VMRC on another system for problem determination
    4. currently I've disabled AMD's sound over DisplayPort and HDMI, to avoid nuisance default sound reassignment to one of those devices, since I don't use my monitors for sound
    5. occasionally, 2-3 times per hour of constant use, my mouse seems to drop some packets randomly for about 1/3 of a second, no big deal, and this isn't associated with any CPU or disk IO load
    6. can't snapshot or vMotion the VM, the same old restrictions ESXi VMs have for RDM users
    7. adding USB 3.0 devices is a little clumsier than simply plugging them in, since you need to take steps to map each one to the VM as well, and sometimes the VM needs to be shut down for this to work; gladly, these mappings persist through reboots of the VM or ESXi host. This guy has an easier way, see Running a virtual gaming rig using a Xeon D server, a GFX 750Ti, PCI passthrough and a Windows 10 VM, but it's USB 2.0 only, and only tested on ESXi 6.0
    8. can't sync iPhone with iTunes via a physical USB 3.0 connection attached to the host/server, mapped to the Win 10 VM using the ESXi UI (Apple device seen in Device Manager, but not in iTunes)
    9. avoid RDM mappings of the C: drive for this UEFI VM, for much more robust booting; I'm quite happy now with a thick-provisioned 1.7TB virtual drive that lives on my VMFS 6.81 formatted Samsung 960 PRO 2TB M.2 NVMe SSD
    10. if you turn SR-IOV on or off in the BIOS, you'll need to reconfigure passthrough on your ESXi host, reboot, then re-add the PCI devices to your VM settings so they show up again in Device Manager as the expected 'AMD Radeon HD 7700 Series' video device and the 'AMD High Definition Audio Device' (see the one-liner after this list for re-finding them); for me, I just right-click disable the audio device, as I don't use my monitors' speakers
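
    For gotcha 10, here's a quick way to re-find the GPU's two functions (video plus HDMI audio) from the ESXi shell before re-adding them to the VM. The addresses shown are examples only; yours will differ:

        lspci | grep -i amd
        # expect something like (addresses vary):
        #   0000:02:00.0 Display controller: Advanced Micro Devices [AMD/ATI] Radeon HD 7750
        #   0000:02:00.1 Audio device: Advanced Micro Devices [AMD/ATI] HDMI Audio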

    Screenshots

    These are the latest AMD Radeon Settings, using AMD Radeon Software Crimson Edition 17.1.1 Driver for Windows® 10 64-bit on my Windows 10 Creators Update Workstation VM.

    Screenshots: AMD Radeon Settings Overview, Software, and Hardware tabs for the VisionTek 7750 in a Windows 10 VM under ESXi 6.5 on the Supermicro SuperServer Bundle 1, by TinkerTry.

    Apr 30 2019 Update

    Everything is still working just great with vSphere/ESXi 6.7 Update 2, and the latest VM hardware version 15, for my daily driver Windows 10 Build 1809 VM that is also my workstation. When upgrading to this hardware version, I did have to configure passthrough of the PCI device again for my VM, but gladly, I didn't have to fiddle with the 2 AMD PCIe devices for ESXi itself; those persisted through the upgrade.

    I also noticed that my Windows 10 Build 1809 wound up with AMD Software version 18.12.2, with driver version 25.20.15002.58 dated 12/6/2018, working fine.

    There's an excellent article at VMware Digital Workspace Tech Zone that gets into much more detail than my article does. It covers the three types of Graphics Acceleration:

    • Virtual Shared Graphics
    • Virtual Shared Pass-Through Graphics
    • Virtual Dedicated Graphics

      Virtual Dedicated Graphics Acceleration (vDGA) technology, also known as GPU pass-through, provides each user with unrestricted, fully dedicated access to one of the host’s GPUs. Although dedicated access has some consolidation and management trade-offs, vDGA offers the highest level of performance for users with the most intensive graphics computing needs.
      The hypervisor passes the GPUs directly to individual guest virtual machines. No special drivers are required in the hypervisor. However, to enable graphics acceleration, you must install the appropriate vendor driver on each guest virtual machine. The installation procedures are the same as for physical machines. One drawback of vDGA is its lack of vMotion support.

      Supported vDGA cards in Horizon 7 version 7.x and vSphere 6.5 include:

      • AMD FirePro S7100X/S7150/S7150X2
      • Intel Iris Pro Graphics P580/P6300
      • NVIDIA Quadro M5000/P6000, Tesla M10/M60/P40
      For a list of partner servers that are compatible with specific vDGA devices, see the VMware Virtual Dedicated Graphics Acceleration (vDGA) Guide.

    In that article, you'll also find an excellent features table (scroll down a little).

    So it's vDGA that we're using in my Bundle 2 SuperServer Workstation. Taking a look halfway down the page, in the section entitled Virtual Machine Settings for MxGPU and vDGA, it seems that only steps 2 (add PCI device) and 3 (reserve all guest memory) apply to my situation, followed by the Guest Operating System section step 3a (install GPU device drivers in Windows).
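
    In .vmx terms, steps 2 and 3 boil down to entries like these. The PCI address and memory figures below are illustrative, not my exact values; the vSphere UI writes the equivalents for you when you add the PCI device and check 'Reserve all guest memory':

        pciPassthru0.present = "TRUE"   # step 2: the GPU handed to the guest
        pciPassthru0.id = "02:00.0"     # host PCI address of the card (example address)
        sched.mem.min = "16384"         # step 3: reserve all guest RAM (example: 16GB VM)...
        sched.mem.pin = "TRUE"          # ...and pin it, required for passthrough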


    Mar 22 2020 Update

    Last month, I had a long 4K video that apparently had HEVC content in the timeline. It was time for a re-render, and it was time to see if any of these newer GPUs happened to work with VT-d passthrough under the latest VMware ESXi 6.7 Update 1. No joy, no bueno, no worky still: none of them worked besides the original VisionTek that I still use for my daily driver SuperServer Workstation.


    See also at TinkerTry

    Excellent comments left by Vic T here:

    Nice write-up, as always. Your site has always given me inspiration and kept me up-to-date with what's possible in my home lab - a small-chassis x10sri-f on xeon e5 L v4, and a sys-e200-8d "frankensteined" with a 60mm fan to reduce the stock fan noise.

    Just wanted to share what I have with regard to GPU and USB3 isochronous, FWIW.

    On GPU, I managed to find an older Grid K2 card which has 2 GPUs on board - I passed through one of the GPUs to a VM for demanding tasks, and the other GPU can still accelerate other VMs via vSGA (VMware Tools' 3D acceleration via Xorg on the host) for lower requirements, with the added advantage of being vMotion-able. The Grid K2 requires good cooling, so I ended up having to add a few more fans, and so far the noise has been bearable. As opposed to the newer Grid cards, the K2 doesn't require the newer Nvidia software licensing, which can get very expensive.

    On the USB, I've tried 3 USB-to-IP devices (yeah, part of work eval to passthrough USB-to-serial console and Rainbow tokenkeys): Digi's AnywhereUSB, SEH's UTN2500 and the Silex DS600. The AnywhereUSB is USB2.0 only and doesn't support isochronous and had driver issues. So far I've been having good results with SEH and Silex, both support isochronous and managed to run a USB-based DVD drive successfully.
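
    For anyone curious how a host reports that vSGA/passthrough split, ESXi's graphics namespace (present in 6.5 and later) shows it from the shell:

        esxcli graphics device list   # each GPU and the graphics type it's serving
        esxcli graphics host get      # host default graphics type (shared vs sharedDirect)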

    Image: front view of home datacenter at 66 watts.
    Video: How to install the VisionTek 7750 4K UHD GPU card into a Supermicro SuperServer 5028D-TN4T Mini-tower.

    See also

    Suddenly, the 12 core Supermicro SuperServer isn't looking very pricey, even when fully loaded with 128GB of RAM!

    • Apple unleashes 18-core iMac Pro with 128GB RAM, bumps other Macs to Kaby Lake

      The biggest surprise was that Apple announced a new "space gray" iMac Pro that can be configured with up to an 18-core Intel Xeon CPU, 128GB of RAM, and a 4TB SSD drive. Keep in mind that the base model of this beast will feature an 8-core Xeon, 32GB of RAM, and a 1TB SSD and it will cost $4999. So, the price of the fully configured version is going to be astronomical, likely in the $8K-$10K range, when it arrives at "the end of the year."

    This excellent article details a different approach that leverages ESXi 6.0 and USB 2.0 passthrough:


    All Comments on This Article (49)

    Wow, didn’t see that resolution coming, thank you so much for sharing what happened!

    I figured out the vCPU situation. It seems when requesting 4 vCPUs for a new Windows 10 VM, the VM creation wizard defaults to creating 4 cores, each in its own socket. It isn't supposed to matter (based on what I've read) but switching it to 1 socket with 4 cores per socket solves my issue. https://uploads.disquscdn.com/images/1088e3939427aa59208eb57474a69d40a9a9a6c2735be3842c8d5b1bbde4d68f.png

    Onward!
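
    If you'd rather make that change outside the wizard, the same fix in .vmx form (assuming a 4 vCPU VM) looks like this:

        numvcpus = "4"                # total vCPUs
        cpuid.coresPerSocket = "4"    # 1 socket x 4 cores, instead of 4 sockets x 1 core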

    https://uploads.disquscdn.com/images/9a5781c92947000c8e20cc86492b5c58496e4f5307bf774352affef04eb113e7.png https://uploads.disquscdn.com/images/72647a302630ada8779f27f16aceca45831a4215ec2c1664a5cb0c3c0958d4b1.png https://uploads.disquscdn.com/images/60ed06896c16e15d878719be0bc1f41efa4626341f61eea7947ef6f4e9f912b4.png Sorry. I missed this reply earlier.

    1) Yes, I did install the datastore on my Samsung 970 M.2 NVMe SSD
    2) Datastore was formatted with VMFS6
    3) Yes I did use the (default) UEFI boot settings

    I assigned 4 vCPUs, and in Device Manager on the VM I see 4 processors, but Task Manager only shows one -- and the CPU is swamped in the VM (and things are sluggish) so it acts like it is only seeing one vCPU. I deleted and rebuilt the VM. No change in this behavior. My CentOS Linux VMs are showing all the vCPUs assigned to them. I'm more of a Linux guy than a Windows guy, so I could be missing something simple, but I checked my old vSphere 6.7 ASRock host, and when I pass through 4 vCPUs on that machine to a Windows 10 VM, all 4 show up in Task Manager.

    I've attached some screen shots.

    Thanks!

    John

    Thanks for the links. I had some USB passthrough success back in 2015 - but haven't touched it since, and have decided that, for me, it makes things "too brittle." I.e., if I don't look at it for 8 months, and want to upgrade something, getting it working again is too much of a pain.

    Yes, handling pass through is more of an advanced thing, not exactly typical for those new to ESXi, you are bold! If you do decide to get a keyboard and mouse passed through, there's the hardware approach, with this USB 2.0 Digi device good for keyboard and mouse
    https://TinkerTry.com/digi-anywhereusb2-usb-connect-over-ip-to-vm
    and this USB 3.0 Silex device that's better for sound and thumb drive storage devices:
    https://TinkerTry.com/superserver-xeon-d-workstation-revisited-with-esxi-65-with-fix-for-usb-sound#jun-24-2017-update
    although sound has been tricky on ESXi 7.0 for mysterious reasons.

    If you decide to go with "hacking" ESXi itself for that one VM to claim the keyboard, note that I haven't tested this with ESXi 7.0:
    https://twitter.com/lamw/status/1260561406484602882?s=20
    and be prepared for a lot of time consuming testing. To get to DCUI after such a hack, you'd need to do it via SSH
    https://TinkerTry.com/did-you-know-you-can-get-to-the-esxi-local-console-ui-from-an-ssh-session
    and you'd need to take care to not break your network, since the keyboard will no longer work to change things in the DCUI (local ESXi UI).
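
    For reference, reaching that local console from SSH is just one command:

        # from an SSH session to the ESXi host (SSH service enabled)
        dcui    # Ctrl-C exits back to the shell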

    Glad you got video squared away!
    As far as ESXi 7.0 itself, installed much like my (admittedly older) 6.7 article:
    https://tinkertry.com/how-to-install-esxi-on-xeon-d-1500-supermicro-superserver
    As far as slowness of your Windows 10 VM, since it's an ESXi 7.0 VM, you don't want any drivers for the motherboard or anything.
    1) When you added an SSD, was it an M.2 NVMe SSD (such as Samsung 970 EVO) for far far better speeds than any SATA drive including SATA SSDs?
    2) When you formatted it for VMware's use, was that VMFS 6 you chose?
    3) When you created a Windows 10 VM, was it done with UEFI settings ideally, seen here: https://twitter.com/paulbraren/status/1269724429745086470?s=20
    although that shouldn't affect speed.

    After changing the VGA BIOS setting to "Offboard" and getting the AMD Radeon driver installed, the passthrough of the newer 2GB board seems to be working fine.

    And yes, I do have to do the passthrough toggle trick after restarting the server. :(. Hopefully that'll get fixed in later 7.x releases. Of course, I can't imagine passthrough is a big demand in production environments.

    I struggled for a bit with the weirdness between having both the VMware video driver AND the AMD functioning at the same time. I seem to have it in a working state now.

    One odd thing - performance is really slow in that VM. Should I be installing other drivers? Like the SuperMicro/Intel motherboard drivers? Task Manager only shows a single CPU, even though I've allocated 4, and Device Manager shows 4 processors.

    It had been a few years since I messed with Windows and passthrough. I somehow thought the physically attached keyboard and mouse would "just work." But it turns out on my older ASRock motherboard, I had passed through a separate PCI controller with those devices attached. I might look at one of the other solutions to try to get the keyboard and mouse to work. Not a high priority.

    Thanks, for your help, Paul!

    John

    Correct, no pciHole needed, AMD is easy (NVIDIA harder) with VMware. If you're on ESXi 7.0, you will need this article to work around the lost mapping if you reboot ESXi:
    https://TinkerTry.com/vmware-vsphere-esxi-7-gpu-passthrough-ui-bug-workaround
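
    If the linked workaround's UI steps get tedious, ESXi 7.0 also added an esxcli namespace for this. A sketch, assuming your card sits at 0000:02:00.0 (check esxcli hardware pci pcipassthru set --help for the exact flags on your build):

        esxcli hardware pci pcipassthru list                              # passthrough state per device
        esxcli hardware pci pcipassthru set -d 0000:02:00.0 -e true -a    # enable, apply without reboot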

    Thanks for the info, Paul.

    I hadn't yet changed that VGA BIOS setting. I'll try that. I hadn't changed it as I assumed it would force all video through the PCI video card, where my goal was just to have the Windows video through that card, and all other (BIOS, VMware console, etc.) through the VGA, which is what I was doing on my older ESXi server.

    I'm not doing the keyboard and mouse mapping.

    So -- I shouldn't need any manual config edits for pciHole, etc., for the video card?

    Thanks!

    John

    Glad that GPU install helped!
    No problem, VMXNET 3 works great once VMware Tools is in. Most likely the GPU isn't passing through due to the BIOS setting, if you haven't changed that yet; see step 15:
    https://TinkerTry.com/recommended-bios-settings-supermicro-superserver-sys-5028d-tn4t#recommended-bios-settings-for-sys-5028d-tn4t-step-by-step
    then go into Device Manager Display adapter, then disable the VMware SVGA 3D device.
    Are you doing keyboard and mouse mapping, or just GPU? That's a much more ambitious project, and I've had some issues doing it with ESXi 7.0, see also https://TinkerTry.com/superserverworkstation
    Note you won't be able to vMotion once you map any device through to a VM.
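
    A heavier-handed alternative to disabling the VMware SVGA 3D adapter in Device Manager is removing the virtual display adapter from the VM entirely, via a .vmx entry. Note that you then lose the VM's console display in VMRC and the ESXi UI, so only consider this once passthrough video already works:

        svga.present = "FALSE"    # remove the virtual VMware display adapter from the guest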

    The server arrived - I installed the 2 GB version of the video card (glad for your video -- otherwise I might have given up on the physical install!). It shows up in the hardware list, and I've configured it for passthrough to a fresh Windows 10 VM, but it isn't showing up there yet. I was curious if you have an installation guide for Windows 10 with the passthrough bits -- I'm not sure my process was ideal -- for example, I set it to use VMXNET 3, but that isn't available until VMware Tools is installed -- should I have used the default network adapter instead, at least initially? Also -- do I need to manually edit the VM configs for any passthrough info, such as pciHole? Thanks!

    Sadly, no. They did let me know the day I ordered that it would take a while. Of course, I was hoping things might move a little faster...

    "Your order Sxxxxxx is scheduled for assembly and testing. This requires a few days. We estimate shipping your order by 05/27/2020."

    Any updates on your order status?

    Congratulations, that’s awesome, hope it ships soon. See also https://TinkerTry.com/vmware-vsphere-esxi-7-gpu-passthrough-ui-bug-workaround for a hopefully helpful tip.

    I'll give it a shot. Still waiting on my Bundle 2 to arrive -- they must have been backordered. Looking forward to it!

    Seems likely it would work just fine, but it's hard to be 100% certain without trying, with things like firmware changes that VisionTek might be shipping these days possibly causing an issue. I wish I knew for sure, and I see the pricing is a bit lower too. If you do jump in, please let us know how it works out, and I'll go ahead and append the article accordingly.

    I see a newer version of this VisionTek AMD 7750 is available. Any reason to think it would not work in the ESXi passthrough to Windows as described above? https://www.visiontek.com/radeon-7750-sff-2gb-gddr5-2x-dp.html

    That is something I've thought about but not tested, as I don't have an eGPU, and I worry about noise a bit. Do you have a particular Thunderbolt 3 card you've picked out that you're interested in? This William Lam article should give you some hope that Thunderbolt devices are just seen as if they're on the PCIe bus directly:
    https://www.virtuallyghetto.com/2015/01/thunderbolt-storage-for-esxi.html

    Does anybody have experience with using an external eGPU linked through a Thunderbolt 3 connection to the SYS-5028D-TN4T? By adding a PCIe card with a Thunderbolt controller, one could get around the space and power constraints of this chassis and as such get access to a whole range of GPUs from the external housing.

    But will ESXi (6.7) be able to pass this PCI device through to a VM? Will the external GPU be seen as a discrete device on the PCIe bus, or will the whole TB controller be seen as 1 device?

    Probably this is a very exotic question, but one never knows, maybe somebody has tried this.

    Thanks, Paul, I appreciate you writing back with perspective on more general support issues -- that helps me understand the situation.

    I think the PCIe 3.0 x16 slot is pretty versatile and while there isn't a huge range of half-height half-length low-profile GPUs available, there are some really interesting options. Many people may want a GPU for compute or local graphics with this versatile/exciting compact workstation platform.

    It's unfortunate Supermicro doesn't give it more attention. If I find out anything further I will post an update.

    Bryce, I've not run into this error personally, so admittedly I don't really know what might be going on here. My suspicion is that it's just another example of pushing beyond anything Supermicro really bothers testing for, see also Oree's comment here:
    https://TinkerTry.com/esxi-gpu-passthrough-update-for-xeon-d-superserver#comment-3833558088
    I have a challenging time working with Supermicro on items that are fully supported, and describing a problem secondhand to them is unlikely to lead to a favorable outcome, especially with how unresponsive they've been lately. One direct example was testing with the Intel Optane P4800X, where they essentially had no interest in my bug report that it hung my ability to get into the BIOS (A9 error)
    https://TinkerTry.com/intel-optane-900p-should-be-great-for-home-lab-enthusiasts#nov-19-2017-update
    since they never claimed that device was compatible with Xeon D-1500. It was really designed for and tested with Intel's new Purley line of servers. My point to them was that people are going to buy the consumer version anyway, the 900P, so any info or KB or workaround would be good. But it turns out the 900P had even worse problems for my VMware use-case, see:
    https://TinkerTry.com/intel-optane-900p-should-be-great-for-home-lab-enthusiasts
    My point with this tangent is that when it comes to video card quirks, I'm guessing their resource allocation puts those even lower on the list than storage, given their server focus, and the single HHHL PCIe slot that these systems are typically deployed with.
    I therefore would suggest that you contact them directly to inquire
    https://www.supermicro.com/24Hour/24hour.cfm
    so at least they have a record of this issue, and perhaps can spare others with a FAQ someday, for folks that have your issue with the same or similar hardware. It certainly does seem odd that CSM would have an effect on VGA settings, but with Supermicro's removal of release notes
    https://TinkerTry.com/superserver-xeond-bios#bios-release-notes
    it makes it even harder for me to point people to fixes in various releases, given that information is no longer made public.

    Hi Paul, did you have any thoughts on the error "Error-Unrecoverable video controller failure. – Assertion" as I described above?

    Hi Paul,

    Just thought I would add a comment here about my experience with GPU in the Supermicro SuperServer 5028D-TN4T. I have been using a PNY NVIDIA Quadro P600 in my box running Linux. It works great when running without updates, but I do (or at least, did) have issues with the GPU -- especially when doing BIOS updates.

    Most notably, when updating the BIOS (to 1.3 recently, but also earlier versions), the BIOS gets reset to defaults, so on reboot I re-enter my BIOS settings. Typically I set CSM to Disable, but to disable CSM it often prompts about changing a VGA setting first and rebooting, then coming back to disable CSM.

    When I have done the VGA change I end up with the system not booting at all! I found in the IPMI Error log a repeated message:

    "Error-Unrecoverable video controller failure. – Assertion"

    What made the problem really bad is that "Del" could not enter the BIOS to make any adjustments!

    Fortunately I found this Supermicro FAQ about this error:
    https://www.supermicro.com/support/faqs/faq.cfm?faq=23025

    Once I changed the motherboard jumper to disable the onboard VGA, the system booted perfectly with the NVIDIA P600. (Next time I power on, I'll try to enter the BIOS again!)

    Did you ever hear about this error? I wonder what causes it to happen (a GPU incompatibility?), and why it probably doesn't happen with other GPUs. It seems that after disabling CSM then the jumper can be put back and the system boots okay.

    At least for me, it appears there's some incompatibility between the GPU and the CSM / VGA onboard/offboard BIOS settings. Can you think of anything about these features that could explain what I observe? Could we ask Supermicro about the error, or about suggested/required BIOS settings to avoid this error when using an external GPU (without disabling onboard VGA)?

    Thanks! Would be great to be enlightened about these issues!
    Bryce

    That is odd, and unfortunate that you can't boot the guest with it present. Seems like a UEFI boot order thing? Also interesting that the UEFI setting in the BIOS helped you get further, which gives me ideas for further testing. Thank you so much for the feedback!

    I have Windows Server 2016 as a guest, installed with Plex as a media center. I just want to pass through a card to do the 4K decoding job.

    I've changed the SLOT7 PCI-E 3.0 X16 OPROM from UEFI to Legacy in the BIOS. Now the card can be toggled for passthrough successfully. However, when added to the guest, the guest won't boot... neither BIOS nor UEFI boot works...

    See also this guide at amd.com
    https://www.amd.com/Documents/MxGPU-Setup-Guide-VMware.pdf
    impressive level of detail (despite being a bit outdated)

    Well, I doubt any GPU is really supported for X10SDV motherboard or system, see:
    http://www.supermicro.com/products/motherboard/Xeon/D/X10SDV-8C-TLN4F.cfm
    where the lists Supermicro does publish are:
    Tested Memory List
    Tested HDD List
    Tested M.2 List
    Tested AOC List

    But my issue was just the yellow bang, and an inability to have the K1200 "seen" by the VM I was passing through to, which was Windows 10 in my case. I'm afraid NVIDIA would much rather you just go with their Grid products
    https://www.nvidia.com/en-us/design-visualization/solutions/virtualization/
    and from my interactions with their engineers, they honestly don't seem interested in competing in the low-end GPU marketplace, whereas AMD does (usually) allow passthrough, but even that can be problematic and hit-or-miss, as discovered with the AMD Radeon Pro WX 4100, which didn't work properly in passthrough mode either.

    Hi Paul, when you say the K1200 fails... was your server still able to boot to ESXi with the K1200 seated?
    Recently I was trying to pass through an Nvidia P1000... with the PCIe channel set to 4x4x4x and the on-board GPU enabled, it was able to enter ESXi. The card can be recognized by ESXi but just cannot be "active" after being toggled & rebooting the host... When changed to offboard GPU, OMG... it got stuck on the logo screen with a "DXE-OOB Data Initialization 91" error... Does this mean the P1000 is not supported on the X10SDV-8C?

    Thanks for all the clarifications :) sorry if I keep bothering you :)
    I want clarification before buying.
    This computer would serve for work and creativity as a graphic designer, and for editing.

    Overall, not really, casual gaming only I'd say (I'm more a productivity worker). Gamers would likely tend to go for higher GHz (and higher watt burns), with double width GPUs and bigger power supplies.

    Is the Intel Xeon D CPU suitable for gaming?

    Thank you for your time.
    Excuse me, there are still things I don't understand, but slowly I'm getting there :)

    Many Mini-ITX system cases would allow much bigger power supplies and full-height video cards, and some even have 2 full-height PCI slots, with one just used for bigger GPU cards like the double-width models you mentioned. I just haven't tested any 3rd party cases myself, but many other folks have. Building it yourself always takes more work up front to get things right; consider https://pcpartpicker.com/ and asking around for success stories, see also https://tinkertry.com/xeon-d-landscape-2017

    I finally understood: the motherboard supports low-profile video cards.
    Another question: if I buy an ATX chassis, will the Supermicro motherboard fit in it?
    Thanks, waiting for answers.

    SYS-5028D-TN4T-16C
    CPU Xeon D-1587
    Does this CPU have limitations on the video card I've listed?
    I know this costs more because it has 16C.

    See my reply here:
    https://www.youtube.com/watch?v=UiNJGQgIXdQ&lc=UgzPaNHBITo831G1fL54AaABAg
    (only one HHHL PCIe slot, and GPU must be below ~70 watt power draw that comes from the PCIe slot only)

    Hi!
    Will the video cards listed above, AMD or Nvidia Quadro, work?
    And what if I use this mini computer for video editing with the Intel Xeon D CPU?
    Apart from that, I've read on Intel's forums that to do video editing with these processors I have to use a video card.
    If I buy a GTX 1050 or 1070, would an Nvidia Quadro also work for graphics?
    Thanks, waiting for answers.

    I agree, that MSI card
    https://www.msi.com/Graphics-card/GeForce-GT-1030-2G-LP-OC.html
    is likely to work just fine with Windows, as long as you're not trying to pass it through to an ESXi VM. I just haven't tried it myself, that's all. Curious how things turn out.

    Is there any reason you can think of that it would not work? The MSI GT 1030 has an HDMI 2.0b port and a Displayport 1.4 port and uses 30 watts so it seems like it would work. It also does 4K. I know it limits me to two monitors but that meets my needs. I just want to make sure there is no reason it would not work.

    I admittedly have not, since I have the need for three displays, including DisplayPort, ideally for 4K someday, see:
    https://TinkerTry.com/superserverworkstation
    https://TinkerTry.com/locate-your-4k-pc-20-feet-away

    Just wondering if you have tried using an Nvidia GT1030. It seems that it would fit and offers comparable performance.

    I'm pretty sure the HD7750 draws more than 20W. I'm just uncertain whether this measurement is accurate. I initially thought my PCIe slot was limiting it to 8 lanes. My Windows VM is in Proxmox 5.0, and the video card is passed through with KVM.
    Anyway, I may test it again with a bare metal Windows install, to see if it has to do with the OS.

    Interesting how it shows 50% for fan speed, despite the lack of a fan speed controller (but I did put a NA-SRC10 3-Pin Low-Noise Adaptor inline) https://uploads.disquscdn.com/images/92d8b7a14a5f36ddbe72cae5d796ffa19e90d44abfc46c713a26a21d7629b6f9.png https://uploads.disquscdn.com/images/e40764ab55f8cc4743f671f55cae77219c5349fd1651d382e47f9ac9f0b7407f.png

    Hmm, not sure! Doesn't seem like it, based on http://TinkerTry.com/compare but I was taxing the GPU and CPU at once to crank up the watt burn.

    I'm currently doing VT-d passthrough of my HD7750 to my Win 10 Version 1709 VM, using ESXi 6.5U1EP04 as my hypervisor. Interesting that GPU-Z 2.4.0 is showing 20 W for my Board Power Limit; I wonder what the other GPUs that I tried would have said. Thank you for reading my article, wenlez, hope the screenshots help! https://uploads.disquscdn.com/images/cf98ec2dbebb58f2fc74f1171aca3f99a7e342ddcec8e830e8f5d0692934d3a2.png https://uploads.disquscdn.com/images/2ab97794ef8a2d15408ae578d7552c5d1b3778aeffd4b7ca069130cdc30aa95c.png https://uploads.disquscdn.com/images/57fe522e7e8635826b609b6d845b26bcdc7dc03bf69b157ef750636d4979f437.png https://uploads.disquscdn.com/images/d396c81da12282159a5dcf7275d2b9bbfa3bea6df54083b1fbcffa83ab0c30a1.png

    I don't know if this happens to any of you, but my X10SDV is limiting PCIe power to 43W. I am using an SFX 450W power supply, through the X10SDV's 24-pin connector. My video card is a Radeon RX 560 (75W TDP).

    https://uploads.disquscdn.com/images/63df2acfd3c82e72e6159efdef8d8300596eef34cde7584403ff56be0dc8def2.png



    The video card doesn't have a 6-pin power input. I PCI passthrough this to a Win10 VM, under Proxmox 5.



    Are you able to find out if your HD7750 is limited to 43W as well?

    Hmm, hadn't thought to do that; not sure IPMI would like the RPM changes, but it might work. Of course, the system would run warmer. The CPU fan makes much of the noise, see also my (failed) testing here:
    https://TinkerTry.com/superserver-combined-cpu-and-m2-cooling-fan

    I'm curious, can you use a Low-Noise Adaptor in the Supermicro SuperServer SYS-5028D-TN4T to make the case fan quieter? I find the IPMI settings limited for (case) fan control and am looking for an option to turn the RPMs down a bit to make things quieter. Is this an option or are there any other options to achieve this?