What fits in any home virtualization lab, has 8 Xeon cores, 6 drives, 128 GB memory, and 3 4K outputs from a Windows 10 VM? Your new Supermicro SuperServer Workstation!

Posted by Paul Braren on Jul 15 2015 (updated on Mar 31 2018) in
  • ESXi
  • Virtualization
  • HomeServer
  • 115 Comments

    Imagine an efficient little server that can double as a workstation. I don't mean one or the other like that dual-boot nonsense. I mean both. A whole hog VMware vSphere 6.0 Datacenter, along with a fancy schmancy Windows 10 workstation, with a dedicated keyboard and mouse. Simultaneously. After you OMG, hold off on the LOL until you actually read the rest of this success story.

    Aug 19 2015 update - This pre-configured bundle is now available! Assembly, BIOS configuration, Windows 10 install procedure also now available here, demonstrated in a new video here.

    This quiet little (8" x 9" x 11") mini-tower can be stuffed with 6 drives total, including the latest-fastest-littlest NVMe drives, right in that included M.2 slot, or via a PCIe NVMe add-in card.

    Idles under 40 watts, but even with heavy loads, and all 4 3.5" hot-swap drive bays full, that only climbs to about 85 watts.

    I'm not done yet. Remember my article about how ESXi is usually run headless?

    What if you could get a half-height PCIe graphics card in there, with 2 HDMI and a Mini DisplayPort right on that little backplate? Not just an anemic old card. A modern card with enough GPU grunt to handle 3 4K monitors, with audio. Yes, that's 25 million pixels total. At a cost of about 50 additional watts, with the 250 watt power supply still not even half "full," and those CPU, PSU, and chassis fans still not maxed.


    How can this be? Well, it just so happens the Supermicro SuperServer leverages Intel's pretty sweet recent innovation they call SoC (System on a Chip). It's where you jam a lot of components onto a 6.7" x 6.7" (17 cm x 17 cm) motherboard by incorporating a lot of the watt-burning stuff you usually add on later via PCI, such as 2 VMware-friendly Intel i350 1GbE ports, and 2 10GbE ports. Putting it right on the mobo increases efficiency.

    None of that legacy junk cluttering things up either, or wasting watts. Yes, time to leave IDE, serial, parallel, and audio ports and chips behind.

    First Intel engineered all this datacenter goodness right on that little mobo that could, then they apparently tuned the heck out of it all, ensuring the overall component package with the CPU permanently attached uses as few watts as possible. Yes, that includes that roughly $800 8-core Xeon D-1540 CPU. It's smart enough to sip power when idling, heading south of 1GHz when there's nothing much going on. As soon as some cores feel demand, they're able to instantly jump right up to 2.4 GHz of turbo.

    SMCI_X10SDV-TLN4F_Angled

    So, which company picked up on the promise of this nifty little pre-assembled motherboard/CPU combo? That's 20 year server veteran Supermicro. After years of a home virtualization scene dominated by a variety of white boxes, Mac Minis, Intel NUCs, and various other 16 GB or 32 GB memory maximum systems, Supermicro serves up this little guy that's just dripping with awesome sauce, making it SOOO much easier for me to pick what I'd recommend for your virtualization lab. Honestly, I've been waiting for this opportunity for over 4 years, for the chance to finally have a new server I can recommend.

    The Supermicro SuperServer 5028D-TN4T arrives pre-assembled in a lovely little chassis, usually directly from Supermicro in San Jose, CA. All you need to bring to the party is some new DDR4 memory and disks; it's all available here, for example.

    Thanks for the memories.

    benchtesting.JPG
    Bench testing, looking at watt burn with Kill A Watt EZ.

    Why is it I keep mentioning virtualization? Well, what do home virtualization lab enthusiasts like myself tend to run out of first, when running many useful VMs 24x7 for a few years? Memory!

    You remember the old days, when you bought DIMMs, then tossed them aside for newer bigger DIMMs a few years later? Forget that. How about this time, you invest in your IT career, and your home lab, by getting 2 modern groovy ECC DIMMs now, weighing in at a hefty 32GB each. Yep, you heard right, that's 32 friggin' GBs each, for a total of 64 GB today. Oh, at a modest 1.2 volts.

    Wait, there's more. You can still get 2 more 32GB DIMMs later on, for a total of sweet glorious virtualization-friendly 128GB of RAM, in a home server? Holy crap!

    Ok, you're wondering about the price right around now, reserving your excitement until you've heard the bad news. Luckily, it's not as tough to stomach as you might think, especially if you're at all serious about your virtualization needs these next 3-5 years. Hang on just a little longer.

    A serious workstation too.

    Here's the kicker. Why the heck am I talking about adding a video card to your home virtualization server? Well, perhaps you simply can't afford not to. In other words, you'd like to also be able to use this system as a workstation. That's exactly what I set out to do recently, to replace my suffering triple-monitor-attached laptop and a rat's nest of cables with one sweet little box. I've done it, and it all works! Even better, the installation of

    • Windows 10 (GA coming by July 29th!), as I described here
    • VMware ESXi 6.0
      was all rather straightforward, either on bare metal, or in VMs.
    iKVM-showing-ESXi-6

    Hallelujah, it's about time! Thank you Intel. I know you had IT providers, datacenters, and small business in mind with this little guy, but wow does this little server rock, for use in the homes of the people that work at those companies. And wow, what a performant NAS this could also make.

    Thank you Supermicro, for being first to step-up and put this into a chassis that fits right at home. Extra bonus that Supermicro already happens to be the step-up darling for home labs (see Serve The Home), with wonderful Remote Control over IP (iKVM) that rivals HP iLO.

    VisionTek-card-installed.JPG

    The Visiontek Radeon 7750 900686 is available at Wiredzone, Amazon, or Newegg.

    Thank you VisionTek, for that nice little video card that (barely) fits. Windows 10 auto-installs your AMD drivers, and you're good to go, with me testing it on my 2560x1440 DisplayPort Nixeus 27D monitor, and future-proof for some better panels down the line.

    There's more good news, if you're willing to get creative...

    What about that day you wish to take your ESXi off-line for whatever reason, presumably after vMotioning your VMs to another ESXi host (server) somewhere. Now what? How do you get work done on your beloved SuperServer?

    Guess what. That SSD you have in this system can be booted exactly as is, not only as a VM. Yes, it boots both ways, the same Windows 10 instance, the same data on there.

    • Boot Windows 10 up (with no ESXi 6.0 running)? Sure. A workstation that just works, as a workstation.

    • Same Windows 10 booted as a VM? Yep, you can do that too, just like VMware Fusion does on a dual boot Mac.
    TinkerTry-pic-of-5028D-low-angle

    Yeah, that just happened, you read that correctly. This little bonus perk takes just a bit more work, and 2 lines to type into your ESXi via PuTTY (command line) to get the RDM (Raw Device Mapping) configured, but don't worry, I'll document how. I've already got a pretty big hint right over here. Yes, a second drive that your VMs live on is needed, just for the VM config files such as the VMX file, alongside that tiny magical pointer file that tells the VM how to find your actual Windows 10 workstation SSD.
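    For the curious, here's a rough sketch of what those two PuTTY lines tend to look like (hedged: the device identifier, datastore name, and folder below are placeholders, not my exact values, and your paths will differ):

      # 1) list the local disks so you can spot your Windows 10 SSD's identifier
      ls -l /vmfs/devices/disks/
      # 2) create the tiny RDM pointer file on the second (VMFS-formatted) drive
      #    -z = physical compatibility mode (-r would create a virtual compatibility RDM)
      vmkfstools -z /vmfs/devices/disks/<your-ssd-identifier> /vmfs/volumes/<your-datastore>/Win10/Win10-rdm.vmdk

    That Win10-rdm.vmdk is the magical pointer file mentioned above; the VM's virtual disk points at it, while the actual data stays on the physical SSD.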

    Heck, you could even slap that SAME 2.5" SSD into a laptop and take it on the road if you really wanted to, then pop it back in upon your return. Oh my, this just keeps getting better.

    What's the catch of dual booting a single SSD this way? Licensing might give you some issues, I'll just have to wait-and-see, once Windows 10 actually arrives.

    I'm so glad I've finally been able to turn one of my old articles into a moot point, thanks to Intel, Supermicro, and a bit of tinkering. Which article? Little secret those new to virtualization often miss - ESXi 6.0 continues to be mostly headless, just as it was for all prior VMware hypervisor releases.

    Nov 02 2015 Update - VMware support for 10GbE has arrived!
    Optionally, since the 10GbE interfaces weren't visible natively from ESXi 6.0 until that driver arrived, you can use VT-d, aka DirectPath I/O, to pass those 2 10GbE interfaces through to your Windows 10 VM, if you want. Yep, that just happened, it works, and works well. Here are the names Device Manager comes up with:

    My-two-AMD-PCI-Devices-DirectPathIO

    Ethernet0 vmxnet3 Ethernet Adapter
    pciPassthru0 Intel X552/X557-AT 10GBASE-T
    Ethernet 3 Intel X552/X557-AT 10GBASE-T #2

    Notice it even simply calls it pciPassthru0, interesting!

    I wound up sticking with a vmxnet3 NIC type when running as a VM, and the normal Intel I350 driver loading when I'm booted natively to Windows 10. That means I only had to pass the GPU through, so that Windows 10 could automatically download the AMD drivers.
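    If you're curious which devices are even candidates before toggling DirectPath I/O in the vSphere (Web) Client, here's a hedged sketch from an SSH session to the host (the names and addresses your host reports will differ):

      # list every PCI device the host sees (long output); the AMD GPU, its HD Audio
      # function, and the two Intel X552/X557 10GbE ports all appear here
      esxcli hardware pci list
      # or narrow the output to just the device names
      esxcli hardware pci list | grep -i "device name"

    Passthrough itself is then enabled per-device in the client's DirectPath I/O configuration, followed by a host reboot, before the device can be added to a VM.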

    Imagine once 10GbE does arrive, the power of a 3 node cluster for some amazing 10GbE vSAN speeds here, without the complexity of Infiniband.

    Some minor gotchas to be aware of, for those using Windows 10 as a VM.

    • The keyboard and mouse pass-through to the VM was straightforward, once I found that the SIIG USB over IP 1-Port works fine, coupled with any 4 port USB 2.0 hub you might have. I don't notice latency/lag; admittedly, I don't plan on much gaming either.

    • Initial set up of this server will require another PC temporarily, but once you set this Windows 10 VM to autostart with ESXi 6.0, you're good to go.
    • BIOS - you can use another PC's browser, or even the mobile Supermicro IPMI app, to use iKVM (remote console) for BIOS access or ESXi re-install.

    • With this SoC (System on a Chip) design, it's not possible to take just the USB 3.0 controller and pass it through to the VM, because it passes everything else too (SATA, USB 2.0, etc.), see schematic here.

    • You'll want to have good backups of everything, to another system or NAS, but that's true for any systems you build.
    EricS-testimonial
    Thank you so very much Eric, and I sure look forward to testing a Veeam Repository as a backup target soon!
    • The fan on the added GPU card is a bit louder than all the rest, even at idle. This keeps the GPU heatsink cool to the touch, even after long benchmarks with the chassis cover on. If noise is a concern, I have found and tested a way to get this little beast about 20' from where you've got your monitors. That's right, you can situate it in another room, yet still be able to power cycle, access USB 3.0 devices at full speed, and more. I will also be testing the effectiveness of the Computer & XBox Noise Reduction Kit - Dynamat Xtreme 40401.

    Aug 15 2015 Update - The Dynamat works very well, see comment below a related article here.

    Detailed build procedure:

    Aug 19 Update - Assembly and Windows install procedures are now available here. I don't have an ESXi 6.0.0b upgrade procedure documented yet, meanwhile, here's the gist, likely to eventually be published as a second article.

    1. assemble, attaching power, IPMI port, and at least one Ethernet port (bottom left)
    2. install VisionTek 7750 3x4K graphics card, available from Wiredzone, Amazon, or Newegg.
    3. install SATA DOM for ESXi 6.0.0b if you'd like, or just put it on a USB flash drive, which I prefer (leaving the valuable SATA port for bigger storage options)
    4. power up
    5. find the IP address recently leased to the IPMI interface by your DHCP server
    6. use another PC to remotely access that IP over Web UI
    7. start iKVM
    8. install Windows 10 on local SSD
    9. before Windows Update can auto-upgrade GPU drivers, download and install AMD Catalyst Driver for Windows 10 64-Bit 15.7.1
    10. mount ESXi 6.0 Hypervisor ISO
    11. use F11 to boot from that ISO
    12. install/configure ESXi
    13. configure pass-through for the only two AMD devices seen, reboot ESXi
    14. create a datastore
    15. create the RDM mapping to that SSD that has Windows 10 on it
    16. create a Windows 10 VM, using the RDM drive for boot device, and VMXNET3 for network
    17. add pciHole.start = "2048" to the end of the VM's .VMX file (see the .vmx sketch right after this list)
    18. install VMware Tools, reboot VM
    19. attach 13 port Anker USB 3.0 hub
    20. attach USB 3.0 devices to that hub, and map those to your VM, as your needs dictate (keyboard and mouse won't be available yet)
    21. RDP to that VM
    22. disable the VMware brand Display adapter (in Device Manager)
    23. add Silex USB over IP 1-Port DS600 device to your network
    24. add USB Server to your Windows 10 VM, right-click mouse and keyboard to auto reconnect after reboot
    25. make a service out of this USB Server, so that your mouse and keyboard connect even if you reboot and aren't logged in yet, procedure here
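    To make steps 15 through 17 a little more concrete, here's a rough, hedged sketch of the sort of lines that end up in the Windows 10 VM's .vmx file (the file names and entries below are illustrative placeholders; the vSphere client writes the exact RDM and passthrough entries for you, and only the pciHole.start line is added by hand):

      scsi0:0.present = "TRUE"
      scsi0:0.fileName = "Win10-rdm.vmdk"
      ethernet0.virtualDev = "vmxnet3"
      pciPassthru0.present = "TRUE"
      pciPassthru1.present = "TRUE"
      pciHole.start = "2048"

    As the comments further down this article confirm, reserving all guest memory and (in some setups) adding pciPassthru0.msiEnabled = "FALSE" and pciPassthru1.msiEnabled = "FALSE" also helped keep the GPU passthrough stable.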
    Panoramic-image-of-paul-braren-displays-jul-2015

    I'm headed on a road trip to VTUG Maine this week, so I'll need to save the rest of the details of this super fun project for another day, including video of the installation of the video card, for example. And note, I "only" have about 7.8 million pixels currently pumped to my 3 monitors (2 1920x1080, and 1 2560x1440), but good to know there's room to grow ;-)

    Stay tuned!


    What it'll cost ya.

    sys-5028d-tn4t_open-cropped
    Wiredzone, Supermicro Authorized Reseller

    The Server

    Now for the pricing...

    [AUG 03 2015 update - a custom, pre-configured bundle is now available! Details below.]

    Supermicro SuperServer 5028D-TN4T
    at Wiredzone

    • About $1200 USD for the bare bones system with 3 year warranty, just add memory and drives.

    • About $1828 USD if you want Wiredzone to include 2 of the only Supermicro-approved Samsung 32GB DDR4 DIMMs, with room for 2 more.

    It's not available on Amazon or Newegg; I got my system (CPU/mobo/power/mini-tower pre-assembled) at Wiredzone for the reasons outlined here. If you appreciate the information and videos you've found here at TinkerTry, and you decide to buy, please consider using the above link.

    I've pulled together an Amazon shopping cart, see also more about the accessories over here.

    The Add-on Parts

    The Video

    Assembly, BIOS config, Windows 10 install procedure.
    Close look at the Supermicro SuperServer 5028D-TN4T vSphere Datacenter/Workstation Hybrid.


    AUG 03 2015 Update

    New TinkerTry bundled server/workstation now available!

    So here we go, now that you're a fully informed reader, ordering this very special bundle means that what I have will be identical to what you have, ensuring:

    • your experience will much more closely match this blogger's first-hand experience
    • our ability to share tips and tricks for years to come.

    This is a SuperServer/Workstation combination that I can very comfortably recommend, with a price point that's a lot more realistic than buying from the bigger companies generally more focused on 24x7 support than energy savings. This box fits the home server bill very nicely.


    Nov 01 2015 Update

    New TinkerTry Bundles now available, all listed here:

    This is partially a response to this admission, about the leading-edge Bundle 1:

    This system is ready for self-install of VMware vSphere 6.0 with (future) TinkerTry article guidance, running your same Windows 10 SSD as a VM. If you want your ESXi and your Windows 10 SSD running concurrently, VMware ESXi hypervisor passthrough skills are required, along with a separate USB-over-IP switch, and a willingness to occasionally use a second system for initial configuration/administration. I realize a leading-edge combined SuperServer Workstation (that I use extensively) is not for everybody. I suspect many proud owners have simply dual booted theirs.

    Alternatively, the Bundle 1 can be used as a pure Windows 10 workstation with 3 4K video outputs, of course.

    Read all about the Digi that makes this all work here:


    May 07 2017 Update

    • On VMware ESXi 6.5.0d, attempts to pass through the AMD 100-506008 Radeon Pro WX 4100 4GB Workstation Graphics Card were a modest success on BIOS 1.1c, working on about 50% of VM boots, with no edits to the VMX file required. I then noticed the passthrough broke after the move to BIOS 1.2, so I re-specified the hardware to be passed through by ESXi, rebooted, then pinned those 2 AMD devices to the Windows 10 Creators Update VM. But every boot now fails to see the GPU at all. The VisionTek 7750 included in the Bundle 1 SuperServer Workstation continues to work with BIOS 1.2, as it had in BIOS 1.1.

    Disclosure: TinkerTry makes a modest commission on each Wiredzone sale only if you use one of the affiliate links found at TinkerTry. Wiredzone is an authorized reseller that charges very competitive prices. Please consider sharing the URL TinkerTry.com/superservers. This source of web site funding goes directly into delivering more value to enthusiastic fans, and it sure beats complete dependency on advertisements. No sponsored posts, and all relationships with any vendors disclosed. All hardware and software was purchased, and any rare exceptions (loaners) are clearly mentioned. I'm a very discerning buyer who is relieved to finally have a highly-upgradeable virtualization server that is also widely available. The basis of years of fun and interesting articles to come. This common platform helps us to help each other more effectively, reaping maximum benefit from such a significant mutual hardware investment.


    Mar 31 2018 Update

    Here's AMD's very detailed, albeit Windows 7-vintage write-up on doing passthrough. Note that these cards don't actually have video outputs; they're instead intended for Horizon View PCoIP applications/thin clients:

    MxGPU-Setup-Guide-VMware
    • GPU Setup Guide with VMware®

      2.1 Hardware Requirements
      2.1.1 Host/Server
      Graphics Adapter: AMD FirePro™ S7100X, S7150, S7150x2 for MxGPU and/or passthrough
      ***note that the AMD FirePro™ S7000, S9000 and S9050 can be used for passthrough only
      Sample of Certified Server Platforms:
      • Dell PowerEdge R730 Server
      • HPE ProLiant DL380 Gen9 Server
      • SuperMicro 1028GQ-TR Server
      Additional Hardware Requirements:
      • CPU: 2x4 and up
      • System memory: 32GB & up to 1TB; more guest VMs require more system memory
      • Hard disk: 500G & up; more guest VMs require more HDD space
      • Network adapter: 1000M & up


    See also at TinkerTry


    See also

    sth-review-screenshot

    BOTTOM LINE:
    With four port Ethernet (two 10Gbase-T and two 1Gbase-T), solid storage m.2 PCIe x4 and 6x SATA III, a fast and low power CPU (Intel Xeon D-1540) and 128GB of RAM, the Supermicro X10SDV-TLN4F is a must get platform. For those still using Intel Xeon L5520 or L5620 generation processors, one can get more performance in less than half of the power and space footprint which is astounding. For those that always wanted more than the Xeon E3 line could offer in terms of their limited RAM capacity (practical 32GB limit) and core count (4C/ 8T max), this is the answer.


    All Comments on This Article (115)

    Got Rufus to work fine, ESXi 7.0U1 video now here:
    https://youtu.be/V8LMa8vb2ng

    No worries, just glad it worked out, and glad I know more about what has happened for the next time somebody asks. Enjoy your system, hope you're having considerably more fun today! ESXi 7.0U1 installs great, but you'll want to set these BIOS settings:
    https://TinkerTry.com/recommended-bios-settings-supermicro-superserver-sys-5028d-tn4t
    and I found that using iKVM with Java (I know, yuck) worked great for a fresh install of ESXi 7.0U1, whereas I couldn't get Rufus
    https://TinkerTry.com/rufus-takes-2-minutes-to-create-a-bootable-usb-flash-drive-for-esxi-installation
    to work at all.

    Well this is embarrassing, I felt I'd tried all combinations but in going through the list again before calling, I tried username ADMIN (all caps) and the pword from the sticker, also all caps, and I'm in! I'd have sworn I tried that yesterday but I was swearing a lot yesterday. THANK YOU so much for your help and how quickly you have responded! PS- I have firmware version 03.88 (build time 02/21/2020) and BIOS 2.1 (from 11/22/2019)

    See also more about default password changes here, from May 2020:
    https://www.servethehome.com/why-your-favorite-default-passwords-are-changing-supermicro/

    Hey Phil! I haven't heard about this initial password issue on Bundles yet, but they did let me know about some changes in shipping just through October as they deal with some warehouse issues. Thank you so much for providing details, it helps everybody a LOT. I'd like to get you the fastest service possible, so please contact Wiredzone directly in case they've heard about this https://www.wiredzone.com/contactus (I also sent them an email heads-up), and even better, reach out to Supermicro directly at https://www.supermicro.com/en/support/24hour who can definitely give you guidance on your IPMI lockout issue, and be sure to ask that they include better instructions for future buyers. That feedback directly from you is far more compelling to them than second-hand info from me, just a blogger, and my day job keeps me pretty busy during business hours.

    Hi, I received my SuperServer 8 core bundle yesterday. I ordered it on 10/5 and it arrived 10/15. They let me know it would take a few days longer for the custom build (I ordered the standard bundle) and the testing. WiredZone suggested it would ship out on 10/16 but it has already arrived! A little oddity: it came directly from SuperMicro, not from WiredZone, but it does seem to have all the bits. No info on the burn in but I assume it took place. Note: I live in San Jose, CA and SuperMicro is also in San Jose, so that may explain things.

    Now my question. How to login to the BMC? The single sheet of paper that came with the order had info about them changing from the username/password for BMC being ADMIN/ADMIN to it being ADMIN and then a 10 Alpha pword included on a sticker in a couple of possible places. I found a sticker on the back of the case in the format they suggested. However, it doesn't work to allow login. (nor did the original ADMIN/ADMIN work)

    The sheet pointed to a URL to look at if I had problems. That URL has a link to two programs for resetting BMC pwords. One is for setting pwords of many systems at a time from the original ADMIN pword. The other, python code, says it will reset the password to either ADMIN or a pword you include in a one line txt file. (The wording on the page is a touch ambiguous.)

    I used the second program, made up the one line file and tried it out. The program output suggested success but I'm still not able to login using ADMIN, the code on the sticker or the password I put in the one line file.

    Would you have any suggestions for how to get past this issue? Many thanks and I'm very pleased with the system. Sort of a 'Shuttle' system for big boys but at the same price point.

    Wow Joshua, this is such a kind comment, I really appreciate it. Enabling others to have at least one way to enjoy "initial success" is exactly what gives me joy, and makes writing technical stuff so fun. Thank you!

    Joshua Bradshaw

    Thanks, it seems I'm always trying to find a lull in work to make this switch (the Super Server is the backbone of my home office/lab). Simply running VMware Workstation on it to serve VMs works pretty well but I've been wanting to move to ESXi both for the improved performance and simply to build my VMware experience and knowledge.

    A big part of the reason I went with the Super Server was the obvious level of support you provide to your readers and I'm very much the "follow the instructions" kind of learner. Do it the way someone else did so you have an initial success. Then tinker with it to see what else you can do with it, break it, fix it, and so on. There's only so much you can learn working with ESXi when it's a client's network and mistakes can't be made - you only ever do things the one way you know works, and I've always walked in to virtual environments where everything has been set up a certain way but the how and why aren't entirely clear. You do your thing, make sure you don't blow the network up, and leave.

    Thanks for everything you do.

    Darn, my reply seems to be missing, did you see it a couple of days ago? The gist is that yes, you can do NVMe passthrough
    https://TinkerTry.com/how-to-configure-vmdirectpath-pass-through-of-nvme-on-esxi-6-5-update-1
    and yes, the Visiontek works well for pass through, even on 6.5 Update 1, see:
    https://TinkerTry.com/esxi-gpu-passthrough-update-for-xeon-d-superserver
    https://TinkerTry.com/esxi-is-designed-to-be-headless
    You don't have to pass NVMe through if you'd rather just format the NVMe then create a new VM on it, restoring a bare-metal backup of your Windows 10 to this VM.

    Joshua Bradshaw

    Your video link doesn't want to work for me, but I pulled the video ID number out of it and accessed it that way.

    What I'm going to do over the next few days is wean myself off this Windows 10 workstation (Super Server booting directly from SSD) and start using a different computer for my daily work (so that I'm not dependent on getting this back running the way I like right away).

    If I understand correctly, I need to pass through the SSD and the video card to an empty VM I create on ESXi, with the SSD set as the boot hard drive. I have my USB over IP sorted out and already working so there should be no issue running it right away. Am I reading this correctly that the VisionTek card and SATA SSD will work via NVMe passthrough, or do I need RDM mappings?

    With NVMe passthrough getting so easy, not needing RDM mappings
    https://tinkertry.com/how-to-configure-vmdirectpath-pass-through-of-nvme-on-esxi-6-5-update-1
    I admit that it seems I never got around to publishing a detailed guide. I do have a video that may be helpful though, hope you can let us know how it goes:
    https://www.youtube.com/edit?o=U&video_id=OmWJjCxeHVs

    Joshua Bradshaw

    So you said, "This little bonus perk takes just a bit more work, and 2 lines to type into your ESXi via PuTTY (command line) to get the RDM (Raw Device Mapping) configured, but don't worry, I'll document how."

    I'm having trouble finding where that is, help?

    I live in the past (ESXi 6 and Win 7) where things seem to work OK. Sorry I am no help.

    Darn, I never got passthrough working reliably on ESXi 6.5.0a with Windows 10 in the VM, and it got worse with ESXi 6.5.0d.

    OK so I bought a WX4100 which got delivered 2 days ago. I've put that into my SuperServer which is running Proxmox, and using GPU passthrough on it has so far worked perfectly. 2D and 3D work without a hitch and the server hasn't crashed on me or the VM once. I'm on the 1.1c BIOS which I think is the latest.

    I've also tried this with the BIOS and UEFI ROMs on Proxmox with Windows 10 and again it works fine on both.

    So I can't say 100% you'll have no issues, but compared to my experiences with the 7700, this card works properly. I think it's just an age thing, as the ROM on this card is likely passthrough aware, and the older cards were made before they even had to consider that as an option.

    One thing I found extremely hard about this card is there's just so little information on it, but I can confirm it does have sound hardware on it and will play out via HDMI if you've got a screen with speakers or a speaker output. One thing to bear in mind: if you want sound, you can avoid having a separate sound card.

    Also a couple of plus points for Proxmox: it allows you to pass individual USB devices, so you can pass a keyboard and mouse through off a hub if you want, but use the rest of the hub to still plug devices directly into the server.

    The other good point is that it disables the emulated graphics when you do passthrough, so you even see Proxmox's logo as it POSTs on the external screens, plus you can see Windows booting, patching, etc., which previously was never the case on VMware, because you'd only see the screen once Windows had booted and the graphics card drivers had kicked in. Maybe this has changed since I last did it, but thought I'd mention it.

    Ah, that is great news, and testing lots of 3D, I think, proves it's stable in areas where I had issues with the 7700. That card was fine for days on 2D but fire up anything 3D and it would crash within 15 minutes normally.

    Looking carefully at the KVM logs, I could see the BIOS on the card was causing issues.

    Mmm. Might well order one of these then and have a go. I work from home a lot, so mostly this will save me firing up my 1080GTX box when the Superserver is on all the time anyway, to save some electricity, but more so to generate less heat in my study :)

    If this really works with ESXi 6.5.0a it will be a jump forward in performance from my AMD/VisionTek 7750 4K, thx Ryogi!

    Looks like passthrough with the wx 4100 works after all!
    https://manatails.net/blog/2017/03/radeon-pro-wx-4100-review/

    Still not quite clear how this was accomplished:
    https://www.virtuallifestyle.nl/2016/11/running-virtual-gaming-rig-using-xeon-d-server-gfx-750ti-pci-passthrough-windows-10-vm/
    (on 6.0, he's not on 6.5 yet, and I personally need USB 3.0 speeds)

    Admittedly, still in the box, because I'm using my 2 node cluster for 6.5.0a testing, and haven't turned one back into a SuperServer/Workstation recently, as my 2TB SSD now has a decent home:
    https://TinkerTry.com/booting-your-windows-10-from-an-external-drive-acts-as-windows-to-go
    That said, this still has limitations, and I do hope to get to test the WX4100 in the coming month or two.
    Sorry it's taking so long Ryogi.

    Hey Paul, first off, thanks for the great content.

    I was wondering if you ever had time to go back and take a look at pass-through with the WX4100. Thanks!

    yes exactly, the 7750 is too old and expensive seemingly, for what you get (but it would be functional and low power). a gtx 1050 ti would seem to be great here, if it fit in the 'superserver''s tiny case (but those 1050 cards all seem to be 2 slot). I can't find a low profile, single slot, low power, semi-recent vintage video card that can drive a 4k/60hz 4:4:4 chroma tv.

    True, you could even run Windows 10 with Hyper-V; if you've got a single node then likely the missing Hyper-V features on client Windows aren't an issue. Saves dual booting.

    The only thing I'd mention is the 7750 is years old now so game performance isn't going to be very good.

    I'm still out here lurking, thinking about buying a system. I decided to get an m2 nvme instead of ssd, so 950 or 960 evo instead of the 850, 512gb size.

    But oh, the graphics card. I have only used vmware a little bit (i'm a software engineer), so that doesn't look appealing, too much to learn. I do want 4k/60hz. I have a vision of using a frontend OS with hypervisor built-in (hyper-v win 2016 or linux/kvm like proxmox). Too many choices. Since I don't really need to virtualize the graphics card, I would think I could find other alternatives, but the 7750 seems to thread the needle. I could dual boot windows if I ever played a real game :-)

    1) Sure, the GPU does have audio output, but no input. Just assign a USB audio input and output device to your "workstation" VM, such as the one I use http://fave.co/2kdvrpa

    2) Perhaps, but I never got this to work, and this guy has gotten a USB hub pass through:
    https://www.virtuallifestyle.nl/2016/11/running-virtual-gaming-rig-using-xeon-d-server-gfx-750ti-pci-passthrough-windows-10-vm/
    but not under 6.5. So for me, my only tested solution is still the Digi.
    https://tinkertry.com/digi-anywhereusb2-usb-connect-over-ip-to-vm

    I hope this helps!

    No problems at all. I didn't get into it expecting it to be easy or requiring it to work. Graphics card pass through is still a reasonably fringe case, at least with cheaper cards.

    I've not been keeping up on this in fairness, so didn't know about the new WX4100, but that looks like an interesting card. It's a shame they are all pro level cards with the price to match; however, I might do some research to see if anyone has even tried them. I'm sticking with Proxmox for the moment as I've too much on it now to easily shift to something else, but I had the most success with Proxmox as well.

    If I try I'll let you know.

    Paul, congratulations on your new job! I would like to ask you a few questions:

    (1) I would like to use Skype on one of my SuperServer Workstations but the VisionTek 7750 does not have sound input (it only has output). Is there any way to have sound input? Does SuperMicro have motherboards for Xeon D with 2 PCI-E slots (graphics card and sound card)?

    (2) Can we assign a mouse and a keyboard to a VM if we install a PCI-E USB card and assign it to that VM (passthrough)?

    Thanks a lot for your help

    I am very interested in how a more recent video card works across the main hypervisors. At a basic level, I am interested in just driving a 4k/60hz 4:4:4 chroma monitor reliably. The builtin video on the superserver doesn't scale to 4k. Based on my web research, it seems like not many people are using a virtual gpu shared across hypervisors, which is a shame. So maybe the way to go to get reliability is to run a card natively in an os (for me it would probably be some linux + hypervisor, but maybe hyper-v standalone os), then it would be awesome if you could share the card with a vm os - but afaik, you can't really do that today, with a card also running in a native os (can you?). I should say I hardly know anything about vmware.

    Ah, sorry to hear this, glad you acknowledge the risk, given no guarantees on pass thru, but it's good to push the envelope and learn too. I really appreciate all the time you spent typing up your experience, to help others thinking about the same sort of tests.
    NVIDIA Quadro K1200 fits but doesn't do ESXi 6.0U2 VT-d passthrough properly, and the AMD Radeon PRO WX4100 (that I haven't unboxed yet) should fit, not sure if it will do proper VT-d with ESXi 6.5. Sorry, just haven't had the time to work on Server 2016 and Proxmox pass through tests.

    Hi,

    The long and short of it is I gave up in the end, but my use case might have been the issue.

    Initially, when I got my server, the BIOS didn't actually support all the features needed for Server 2016 to see the motherboard as passthrough capable, but VMware / KVM were fine without those features. That was fixed around BIOS 1.1a if I remember right, so 2016 would also work.

    I however had no luck at all passing the GPU through on Server 2016, but did get it passed through on KVM (Proxmox); however, I had to update my ATI 7700 graphics card's BIOS so it would work.

    It worked fine in 2D, but anything 3D would cause the VM to crash within 10 - 15 minutes or so, tops. The problem is that old graphics card wasn't designed for passthrough and the BIOS still caused problems. As there's a limit of half-height cards and I didn't want to change the case, I called it a day and just use it as a server now.

    I've no doubt the motherboard is fine. It's just that I think it probably needs to go into a bigger case so it can take a more modern graphics card. Unless a better half-height one has arrived recently, of course.

    It's a great server though so I still highly recommend it.

    James, what did you ever get going with hyper-v? I'm also interested in trying a superserver with some video card, probably the one in this package, and want to try kvm or hyper-v. It's difficult to get any info about video cards working.

    I love this! So many ways to do things, and no one way is right for everybody. All the better to know the same exact hardware can be used in two completely different configurations. Thank you Patrick, I really appreciate what you've written up here. Great to hear you've been using it 9 months with no issues! I admit I went back to a laptop for the last few months, since I only very recently finally saved up enough for a 2nd SuperServer, so I can use one as a workstation/SuperServer, and the other as a SuperServer for creating fresh install videos and general tinkering.

    Here's another approach to software RAID that I happened to come across today:
    http://serverfault.com/questions/407305/getting-vmware-esxi-5-0-0-with-raid-using-the-intel-x79-chipset-to-work
    when answering an Intel RST question.
    https://tinkertry.com/superserverworkstation#comment-3087864568

    Here's another approach to pass through of video and USB:
    https://www.virtuallifestyle.nl/2016/11/running-virtual-gaming-rig-using-xeon-d-server-gfx-750ti-pci-passthrough-windows-10-vm/

    Early on, when testing VT-d of different Xeon D components, I found the pass through of USB 2.0 and 3.0 controllers to be a bit odd, and wanted to keep those ports ready for ESXi itself on USB, and for mapping USB devices to any VM of my choosing, so I went a completely different route, as you've noted.

    Again, thank you, so glad you stopped by TinkerTry!

    Since Paul's blogs helped me immensely, I thought I would share what I did. At first I followed Paul's blogs until I better understood all the pieces. Then when I was comfortable with the flexibility this setup offers, I rebuilt everything with some modifications to fit my needs. Unfortunately I don't have a detailed guide for what I did with my setup. Paul puts us all to shame with his extremely detailed guides and videos :)

    I'm using the superserverworkstation gear. I run ESXi off my SSD drive (instead of off a usb drive) and also use the SSD for the datastore for my main VMs (Win10, NAPP-IT NAS, pfSense firewall, Media server VM, and Web server). I created a VM guest for the Win10 workstation with the GPU pass through. I just added the pass through config settings in the vmware client. I did not need to mess around with the vmx file for pcihole settings (after trial and error), but needed to disable my monitor from going to sleep or else it would never wake back up.
    1. Instead of RAID, I set up a software based NAS with 2 WD RED drives using NAPP-IT ZFS as a VM guest. This gives me redundancy, writing the data to both drives. In the vmware client a raw mapping needs to be configured for each drive. It isn't RAID, but works for my needs for a NAS. I also have this as a second datastore for ESXi, for test/lab VMs that aren't used all the time.
    2. For the mouse/keyboard, I initially went the route of the RPi and VirtualHere since I had a RPi laying around. It worked out great and I had zero issues with it over the few months I used it. However, since I was using my Win10 VM as my workstation daily, I found I needed the USB ports more than I thought I would, and I didn't need USB for my other VM guests. I ended up just passing through my USB ports to my Win10 VM. I thought this would not work / cause conflicts based on how the USB is integrated into the motherboard, but in 9 months I have had no issues with this approach.

    Josh, got some responses for you

    #1
    as for RAID, I'm just the messenger for this unfortunate limitation, which has always been true: Intel RSTe RAID doesn't work with ESXi, it's not performant enough and doesn't have ESXi drivers:
    https://TinkerTry.com/superservers#required-reading
    but that said, if you really require RAID that is on VMware's compatibility list (it's very enterprise focused):
    vmware.com/go/hcl
    you can (barely) fit something like the LSI 9265-8i in that PCI slot, then run the cabling to the 4 drops on the hot-swap drive bay backplane SATA connectors.

    The good news is that M.2 NVMe SSDs are so much faster and simpler to configure, and up to 5 of them can be fitted to this system:
    https://tinkertry.com/pcie-to-m2-nvme-accessories-overview
    but I realize that's not redundant.

    #2
    I admit I haven't researched this lately, as I have the simple Digi, but I believe folks have done well with your approach.

    #3
    Cool.

    See also an alternative 6.0U2 approach that includes passing a USB hub through to a VM, but hasn't been replicated on 6.5 yet:
    https://www.virtuallifestyle.nl/2016/11/running-virtual-gaming-rig-using-xeon-d-server-gfx-750ti-pci-passthrough-windows-10-vm/

    1. I was planning to simply use the RAID functions built into the motherboard/BIOS.
    2. I don't currently, but I'm planning to build one out of a Raspberry Pi.
    3. I have ESXi 6.5, yes.

    Glad you left a comment! I go a little long with the words myself ;-)
    No single article captured the SSD and GPU pass thru procedure, admittedly; super niche, it turns out. Before really getting going (I'm travelling so responses are delayed), I need to ask:
    1) what kind of RAID controller are you planning to use?
    2) do you have a https://tinkertry.com/digi-anywhereusb2-usb-connect-over-ip-to-vm?
    3) what version of ESXi, is it 6.5?

    Going to try to keep this brief, despite natural inclinations to the contrary:

    Got a Bundle 1 Superserver. Goal is an ESXi hypervisor with vCenter and various other VMs, as well as the Win10 SSD running as a VM with keyboard, mouse, and video passed through.

    Stuck at the moment with a DOA drive meant for the hot-swap bays stopping me from building the hardware RAID meant for hosting all the VMs. I have ESXi running from the included USB drive.

    I'm not clear on how to get the SSD added and setup so that ESXi can run the Win 10 on it as a VM, autostart, and so on without wiping the drive. Maybe I missed the video tutorial but there are so many (a good thing) that it's sometimes tough to find a particular one. Can you point me in the right direction?

    Yes, I would go with the same AMD card, since I researched newer efficient small GPUs out there with 4K mini DP outputs, and the only new contender was the NVIDIA K1200, but it doesn't work right (I tested it anyway, just in case). NVIDIA prevents their lower end products from working properly with VT-d aka VMDirectPath.

    For more details on how the motherboard and CPU are important, along with a BIOS setting, see also https://TinkerTry.com/vmdirectpath

    It's not a simple topic, but hopefully I've got you pointed in the right direction!

    The AMD card is louder than I would prefer, but it can be quieted using an inline fan speed reducer included in most Noctua fan purchases (software can't control fan RPMs on that particular card, and there's nothing else like it that has come out since, but admittedly, I've not done an exhaustive search in about 7 months). Here's VisionTek's current line-up: https://www.visiontek.com/graphics-cards/results,1-200.html?categorylayout=0&filter_product=

    After a little more research, I've realized VT-d (which is essentially the passthrough of a GPU to a VM) is the limiting factor. VMware seems to have support for a few types of GPU sharing: vSGA (which is "shared graphics", and this is what the NVIDIA GRID would take advantage of); or vDGA (which is "direct graphics", and this seems similar to the GPU passthrough); and a few more I read about here: http://www.brianmadden.com/opinion/Clearing-up-the-confusion-around-VMware-Nvidias-vGPU-vDGA-DaaS-announcement

    I'd really like to access a VM remotely and benefit from a GPU as I do CAD drafting and I'm looking to explore the VMware possibilities. I know you don't see the improvement of a GPU unless you're connecting a monitor to the server, so I'm wondering if that means I would also need VMware Horizon to connect?

    Thanks, it does help a lot, as I either misunderstood or didn't have enough info on gpu passthrough.

    In this instance, is the one GPU to one VM a limitation of the GPU, VMware, or the Motherboard? Ideally, I was hoping to split it onto 2 VMs but this was more of a future possibility not a necessity at this time.

    I really appreciate the info, but one last question: if you did the build again, would you go with the same AMD card, or would you look for other possibilities (assuming this is for the VM, and you don't need 3 - 4K monitors hooked up to your server)?

    The basics of passing a $250 GPU through to a VM are outlined here:
    https://TinkerTry.com/superserverworkstation#detailed-build-procedure
    but that’s an AMD card that is compatible with the VT-d pass through feature of ESXi 6.0. NVIDIA doesn’t support VT-d, with strange things happening when I try anyway, attempting to make the lovely little K1200 cooperate, but they want you to buy their GRID products for the proper VMware experience and support.

    When using GPU passthrough, it also means you assign or pin that entire GPU to just one VM. Also, you can no longer vMotion.

    It is more elegant, but much more expensive, to do the NVIDIA GRID thing, where you share a >$2000 GPU card that is intended to be carved up as many vGPUs that can be assigned to many VMs, where you then turn on accelerated 3D graphics. I talked to NVIDIA at length about this whole matter at VMworld 2016 US, but didn't manage to get permission to record any of it for my video interview article at https://TinkerTry.com/vmworld-2016-interviews

    I hope this helps a little bit.

    Hi Paul,
    I've watched many of your videos and read some of your articles and I have a quick question for you (I feel like you're an authority on all things vmware and supermicro).

    I recently purchased a X10SDV-8C and a 1U chassis SC504-203B mainly to run a few linux and windows VMs (still waiting on some parts to arrive). I may be going beyond the scope of the capabilities of this server, but here's the question:

    I wanted to know if it was possible to install a graphics card in the pci-e slot (with a riser card so it fits in the case) and have it used by the VMs? I read some great things about the NVIDIA Grid K1 but it doesn't have to be that card, I've never done this before and I'm just exploring some possibilities.

    Thanks!

    Hello. I have been reading this article, so I hope others are still reading this too.
    I recently purchased the Superserver, and I am trying to set it up with a Windows workstation, a la Paul Braren. Therefore I have been following the steps listed above. However, when it comes to "22. disable the VMware brand Display adapter (in Device Manager)", I am having trouble. I completed that step, but now I have the Windows 7 "Standard VGA Graphics Adapter" automatically installed along with the "AMD Radeon HD 7700 Series". Regardless of disabling or uninstalling the Standard VGA Graphics Adapter, it returns on the next reboot. The only way I can interact with the VM is through the vSphere console screen. On a couple of tinkering reboots (reinstall Radeon driver, modifying the pciHole, etc.), I have successfully used the passed-through graphics card, so I know that it can work. Any ideas what is happening, and how to allow the Radeon to remain the default video adapter between reboots of the Windows workstation VM?

    I've ordered a 128GB SuperServer for myself from WiredZone (used your link, as with all this content you deserve a kickback).

    I noticed that Windows Server 2016 Hyper-V now supports passthrough and they've even got a script to report what devices might or might not work. I was wondering, if you ever get a chance, could you look into this?

    https://blogs.technet.microsoft.com/virtualization/2015/11/20/discrete-device-assignment-machines-and-devices/

    Interested in not just the GPU, but also whether you can do USB controllers.

    Hope you're still happy with your SuperServer investment.

    Well, that is an unexpected resolution, so glad to hear it. I vaguely recall, last summer when first testing this, that pci.hole didn't always seem to be necessary, during numerous tests (and rebuild/retests) that I had done. I'm currently playing with possible newer/quieter alternative GPUs:
    https://tinkertry.com/superserver-combined-cpu-and-m2-cooling-fan#mar-11-2016-update
    but so far, not working perfectly, so the VisionTek 7750 is still the champ, and I found a way to reduce its fan noise a bit (basically adding a Noctua CPU fan reducer cable).

    Patrick, your multi-month persistence and dedication to resolving this is VERY much appreciated, and something I will be first-hand testing, once the new IPMI is out to go with the new BIOS 1.1 and ESXi 6.0U2, just to be sure all is still square.

    I just wanted to give an update. After adding the additional pci.hole statements to the vmx file, the issue popped up again. However, after letting things run for a couple months, I finally found my issue. It turns out it was all due to the Windows screen sleep setting. I set the screen to "never" power off and now I haven't had the problem occur over the last couple months. Turns out I didn't need the added pci.hole settings after all. I'm not sure why having windows put the screen to sleep randomly causes the video to seemingly never power back up and no longer be visible to esx. I am using a HDMI to DVI adapter, but the issue still shouldn't happen. I haven't been able to test with a native HDMI/DisplayPort screen, but I'm happy this system is rock solid now.

    So happy to see this good outcome, odd that you had to have 3 pci lines and I didn't, but just so glad it's working. FYI, I have 2 1920x1080 monitors using HDMI to HDMI cabling, and 1 2560x1440 monitor using a mini DisplayPort to DisplayPort cable, from my VisionTek GPU.

    I finally had a chance to do more testing; so far I've been able to keep the GPU passthrough working 7 days and counting. It is tough because for any change I make, I need to wait 3 days before I know if it worked.
    I rebuilt my Windows 10 VM with the following settings:

    HW Version 11
    BIOS
    Reserved all guest memory (all locked)
    .vmx file:
    pciHole.Start = "2048"
    pciPassthru0.msiEnabled = "FALSE"
    pciPassthru1.msiEnabled = "FALSE"

    Changed monitor to turn off after 30mins instead of default Windows setting.

    I am still using the free esxi license, but if I do run into problems, I think I'll try removing it and testing on the 30 day default install license. Strangely enough, using just one of the above settings doesn't seem to help. All of them together is when it started working. I can't tell you why it works, I pieced together different things from different forums and blogs. Crossing fingers now hoping things will be rock solid now.

    I also dug out my Raspberry Pi and am using VirtualHere for my USB Keyboard/Mouse connectivity over IP. Working great.

    I'm not sure I can justify the $200 at the moment. Though, you have given me some ideas for testing if my issue is related to the free version. I should be able to use all the features for 30 days if I don't license esxi or remove the free license. At least I think so. I'll have some more time this week to try that out. Also, I'm going to try to document my settings a little better. Maybe I'll catch something I missed.

    I'm using version 11.

    Another money saving option to get licensed access that might work for some people in school is to check to see if your school participates in VMware's VMAP or vITA programs.
    GP

    Paul thanks for the quick response and encouragement. I almost felt like you were putting more work into it than I was :) I teach full time in a cyber security program for adults and HS students, and while I am out two weeks for the holidays I have to multitask, since between myself and my wife I have a couple dozen projects going.

    I was mapping to the device name t10.ATA_____Samsung_SSD_850_EVO_250GB_______________XXXXXX_____ rather than to the VML identifier for the disk.
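    For anyone else who hits this, a hedged sketch of how those identifiers show up on the ESXi shell (the names your own host lists will of course differ):

      # each physical disk appears under /vmfs/devices/disks/ both by its long
      # t10./naa.-style device name and by one or more vml.* entries that link to it;
      # it's the vml.* path that worked here as the vmkfstools -z source for the RDM
      ls -l /vmfs/devices/disks/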

    You ask if I was using the standard client or the web version. I was using the older windows version since that was what I am used to. Hopefully now I will get up to speed on the web client.

    I am up and going with the RDM disk and pass thru. I have the 20' extension set you recommended, so now I need to look at the USB pass through in the web client you mentioned for my C920, and sound. The USB over IP with virtualhere works great for the keyboard and mouse but stutters for higher bandwidth stuff.

    Thanks for the great idea (superserver) and your informative blog. Plus cluing me in on RDM. I had seen it done both ways in articles online, but after 35+ years doing this you would have thought I would have caught it.

    Again thanks,
    GP

    Would you consider the $200 investment in EVALExperience, so you'd have full VCSA for a year? https://TinkerTry.com/evalexperience

    I'm wondering if the vSphere Client (versus vSphere Web Client) is tripping us up a bit too, since not all features are supported from it. Are you using VM version 11 for the VM you created?

    I have also added warnings/caveats to the order page, so potential buyers are better alerted to the possible challenges of an all-in-one Bundle 1 configuration, with self-conversion to a datacenter. It's not really a beginner level project, admittedly. Bundle 2 or 3 (no OS, ESXi ready) is more suited for more people, with a super simple install of ESXi.

    Alright, I had this working in the prior build too, way back in August, sorry that was a dead-end. I'm confident we'll figure out what our differences are, soon after I get myself booted as a VM again.


    Right now, my 2TB Samsung 850 EVO SSD is in my laptop, as I'm finishing up a multi terabyte data migration. Within a few days, I should be able to go back to creating a VM to boot this SSD in that VM again, and will document (and record video of) each and every step, so we can compare and contrast. I appreciate your herculean efforts to be on the bleeding edge of what's possible with pass through with me, and hope you can stick it out just a little longer.


    There is the possibility that the VM being EFI is making this all tougher than it should be (for consistent shutdown, power up of VM, see Robert's comments above), but the long-term goal there was easy upgrade past 2TB someday without requiring major OS tweaks (since GPT allows that).

    I am using the vSphere Client since I'm using the free license. I have the 'pciHole.Start = "2048"' defined in the vmx file, though the passthrough seems to work with or without it (at least for the few days). My setup is a little different since I am running the Windows10 VM without installing directly on the local drive and using RDM mapping. I just created a new VM and installed Windows 10 from DVD within the guest VM. Maybe this is my problem. I may try going back and rebuilding the esxi host and VM exactly as you did.

    Are you using the vSphere Client or the vSphere Web Client, to configure pass through and that Windows 10 VM? Not saying it's relevant, just trying to figure out what might be different about our environments...

    Glad to see you remembered to set the VM Options, Boot Options to EFI. And super interesting that you got the Raspberry Pi working for keyboard/mouse attach to the VM, at a much lower price point I'm sure. I too use a C920 camera, but I simply use the USB mapping feature of the vSphere Web Client and my long USB 3.0 cable to the hub https://tinkertry.com/locate-your-4k-pc-20-feet-away, I realize that likely won't suffice for you, since you can be located much further away over IP.

    Is there a chance there's a typo or mistake in the RDM mapping command you issued? That would be my guess, as far as the most likely reason you can't boot. I know that's the spot where I went wrong on numerous occasions. I also occasionally couldn't figure out what was wrong with a VM, so I'd just start over from scratch, and create the VM again, and it'd suddenly work just fine. Once up, best to simply reboot it, and avoid shutdown where possible, since that's when editing the VM seems to be needed to get it bootable again.

    Suggestion regarding Win10 not booting correctly as VM?

    This is where I am so far:

    Windows 10 pre-installed from Wiredzone
    Install ESXi 6 and VCSA
    Updated ESXi 6 to .1a and VCSA to .1
    Using Virtualhere & Raspberry Pi for USB over IP adapter
    Installed Virtualhere client as a service on Windows 10

    (By the way, this works great for USB redirecting using a Raspberry Pi. Keyboard, mouse, C920 camera, etc. The only thing that I have not been able to get to work remotely is my Eikon fingerprint scanner.)

    Created RDM
    Created ESXi passthru for VisionTek 7750

    Created custom virtual machine for Windows 10
    HW version 11
    no hard drive
    set VM boot to UEFI
    Added pciHole.start = "2048" to vmx file
    Added in RDM hard drive
    Added in VisionTek as PCI device

    Windows 10 boots into recovery mode but cannot recover.

    Checked my Win10 disk; it is GPT and has the standard 4 partitions.

    I just received a larger Samsung SSD and wondered if I needed to change something in the partitions, or if it was a VMware setting I missed.

    Any suggestion would be appreciated. At this point I am starting to pull my hair out. I really want to get this running so I can consolidate all the network monitoring servers and workstations I have going onto one box.

    I started with the "6.0.0.update01-3029758" ISO, and I'm running ESXi on the free license. I found your article for updating to 1a, and am now running build 3073146, which I understand is Update 1a. The issue was happening both before and after the update.
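    For anyone else wanting to move from Update 1 to Update 1a without VUM, the online-depot method from the ESXi shell looks roughly like the sketch below. Treat the profile name as an assumption from memory; list the available profiles first to confirm the exact Update 1a name.

    esxcli network firewall ruleset set -e true -r httpClient
    esxcli software sources profile list -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml | grep 6.0.0
    esxcli software profile update -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml -p ESXi-6.0.0-20151004001-standard
    esxcli network firewall ruleset set -e false -r httpClient
    reboot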

    That is really odd, thanks for sharing Patrick, I'm glad you took the time to do so here. Seems like a different issue. Yeah, requiring an ESXi reboot isn't good. Are you running 6.0 Update 1 or 6.0 Update 1a? I will need to take the time to rebuild my ESXi 6.0 Update 1a on camera, followed by booting my Windows 10 SSD with the SSD and GPU passed through, to compare and contrast our experiences.

    Hi Chazz. I have a somewhat similar issue to the one you mentioned, regarding having to reboot the ESXi host to get the GPU working again. I'm using all the same spec hardware as Paul's SuperServer Workstation, except just 32GB of RAM for the host. I can run for 2-3 days with the GPU passthrough working fantastically. However, the Windows 10 guest GPU passthrough will stop working, even though I'm still able to RDP to it. After that, the only way to get the GPU passthrough to work again is to reboot the ESXi host. I'm running vSphere 6.0 Update 1 and have tried multiple Radeon drivers. I have pciHole.start defined, and tried adding pciPassthru0.msiEnabled = "FALSE" and pciPassthru1.msiEnabled = "FALSE" based on googling around, but still have the issue. The vmware.log and Windows event log don't seem to give me much info to go on. If you found a solution, I would love to know. I'm loving this SuperServer Workstation build, just need to figure this little bit out.
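    For anyone comparing notes, the .vmx entries being discussed in this thread are these (straight ASCII quotes required):

    pciHole.start = "2048"
    pciPassthru0.msiEnabled = "FALSE"
    pciPassthru1.msiEnabled = "FALSE"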

    Sorry, I never answered this one. I always have to call in under a customer's support agreement, but yep, if you're willing and interested, giving them a ring, if for nothing else than to document an odd behavior with RDM passthrough, would be a good thing. https://www.vmware.com/support/contacts

    Given you're passing the GPU through using VT-d, I wouldn't think any tuning of the VM's "Video card" would do much of anything, other than get that VMX file re-written, which got you bootable again. I don't believe I really played with those settings, other than PCI hole stuff that actually also turned out to not be essential.
    By the way, oh my goodness, how many times quotes have bitten me over the years, especially when I was on WordPress and my posts would get auto-converted if I didn't pay attention, such as in this article https://tinkertry.com/vmware-esxi-5-1-can-run-microsoft-hyper-v-server-2012-vms-nice.
    Know that you are not alone!

    Saw those pics (awesome), I follow you on Twitter :)


    That is quite a bit of work for that SS. Cool stuff.


    The small tweak was just increasing the Video RAM on the built-in card via the vSphere interface. It could be done manually, but this worked fine.
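    If you'd rather make that same tweak by editing the .vmx directly, the equivalent knob is (I believe) svga.vramSize, specified in bytes; the value below is just an example for 8 MB. Any such edit rewrites the VMX, which seems to be the part that actually matters here.

    svga.vramSize = "8388608"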

    It's in UEFI, not dual.


    That second bit of oddness, there must have been something wrong with the PCI/VT card on the host. I actually tried to do the PCI passthrough to a different VM with the same result. I ended up rebooting the host and it cleared that up. Hopefully a one-time occurrence . . .

    Yes, GPT. And I do have a valid VMware license, but to be honest I've never used the support for something like this. I'm (certainly comparatively) new to VMware. Any pointers? Or is it just as simple as giving them a call/email?

    What PCI devices are you passing through?

    As far as reproducing, I can't just now, as I'm in the middle of a 2TB Veeam backup that'll take a while. I just finished moving 5TB of data to an iSCSI external datastore, then VMware converted it back to a thick provisioned drive, and that took 3 days, so I couldn't reboot or do anything. My own Samsung 850 EVO 2TB SSD is in my laptop right now, but will be going back inside my SuperServer pretty soon (I was doing VSAN testing work until last week with another loaned SuperServer that is now in Brazil with its owner; it's rather fully loaded, and I love this picture I snapped https://twitter.com/paulbraren/status/672732331581579264).

    I should be able to reproduce what you're asking within a few days, and I like that you found a workaround that is solid for you.

    Curious, what kind of small tweak to your VMX did you make? Was it editing the VMX text file manually and resaving, or was it using the vSphere Web Client to change the RAM or some other config detail slightly?

    Yeah, it would seem the gist is that I went for many weeks of rebooting the VM without incident. But after powering off the VM, it would sometimes have an issue booting again, which you also discovered workarounds for.

    I suspect you're GPT too, and I believe that newish EFI/GPT part of the VMware virtual BIOS is likely to be the culprit here. If we went with normal BIOS mode, we'd never be able to boot from this drive if it grows beyond 2TB in size someday, so I was trying to be forward thinking. UEFI BIOS mode also makes the ESXi install on USB easy, as you likely also noticed.
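    If you ever want to double-check the partition table type from the ESXi shell, partedUtil will report whether the disk label is gpt or msdos on the first line of its output; the device name below is just a placeholder for however your SSD shows up under /vmfs/devices/disks/.

    partedUtil getptbl /vmfs/devices/disks/t10.ATA_____Samsung_SSD_850_EVO_2TB_EXAMPLE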

    I appreciate your great info, and I'm hoping ESXi 6.0 Update X (whatever comes next) smooths over this issue, but I'll see if I can find a way to report this (EVALExperience has no support, do you have a supported VMware license?).

    b) that Windows 10 install on the Samsung 850 EVO 1TB: was the BIOS in Dual mode or UEFI mode, meaning you wind up with an NTFS/GPT boot drive?

    As for your comments:
    NVMe
    My strange NVMe settings helped on BIOS 1.0a, but I don't think they're really needed on 1.0b, and you're not booting from NVMe anyway, so it's unlikely that will matter.

    AMD
    Well, that is odd. I did right-click disable the generic video adapter for the iKVM interface (and set Video to Offboard in the BIOS)
    https://tinkertry.com/superserver-assemble-configure-install-windows-10
    and thus was able to have Windows Update work fine, with no BSODs after the AMD driver auto-installed. Earlier Microsoft Windows Update AMD drivers were faulty and would BSOD, but that all went away by late August.

    From my troubleshooting, this boot-oddness is a symptom of the RDM of the HD. It's not the PCI PT - I removed both devices and rebooted the host and had the same result.


    I'm curious if you can get a chance to reproduce? Any shutdown of the host puts my Win10 RDM PT machine into a boot repair loop. About 1 out of every 10 times, the Win10 startup repair gets it to boot. Otherwise, ANY change to the VMX/machine settings gets the VM RDM to boot. I've been changing the boot delay to 2000 ms as a default.
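    (For anyone following along, that boot delay is just this single .vmx entry, in milliseconds:)

    bios.bootDelay = "2000"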


    I wish there was a better workaround, but that's what I've found so far. On ESXi host reboots or shutdowns, an RDM Win10 machine won't boot unless the VMX is modified in my setup. (Once it boots, I've done dozens of restarts and shutdowns with no issue. It's just that initial time post ESXi reboot/shutdown.) Less than ideal, but still workable.

    f) Booting directly works perfectly. Drivers, everything load without issue.


    However, that little VMX edit that was working to get it to boot seems to be hit and miss.


    I had to totally remove the AMD device and driver from the Win10 PT and the VM via vCenter (PCI). Booted, then shut down, re-added the PCI PT to the VM, then booted (with difficulty), but it did eventually, and the driver is now fine again. Going to let Veeam do its thing and get back at this again tomorrow.

    (No rush; it's Friday night, so any answer is a fast answer.)

    a) yes
    b) on a Samsung 850 EVO 1 TB (Amazon)
    c) Yes, RDM mapping, and VMX on Samsung 950 NVMe*
    d) Yep
    e) Yep
    f) Haven't tried, but will. Pretty sure this will work.
    g) Thank goodness for Veeam, yes.


    * One thing I'm wondering is if this is the issue. I did not set the BIOS settings you recommended for boot to NVMe. I'm going to try that. Also, I was using the Windows Update AMD video driver but just got a driver error (Code 43: the driver had a problem and could not load). Not sure why, but I'll uninstall and reinstall the device and see what happens.

    Sorry I didn't answer faster, but yes, for the rare occasion that happened to me, I did the same workaround, and it booted right up again. Nicely done Rob, glad you found the workaround quickly.
    So, can you confirm:
    a) your BIOS is in UEFI (not Dual) mode, correct?
    b) your Windows 10 was installed to the Samsung 850 EVO (by Wiredzone), correct?
    c) you created a vessel, a VM with RDM mapping, to pass through that HDD?
    d) ESXi 6.0 Update 1a?
    e) Digi for keyboard and mouse over IP?
    f) you can still boot that same HDD in a laptop or natively on the SuperServer, correct? that never breaks in my case, just the RDM mapping of GPT drives seems a little iffy with 6.0 Update 1a.
    g) you are using something like Veeam Endpoint Backup FREE and testing restores, right ;-)?


    Your feedback helps me get a draft I've been working on published; it helps enormously knowing somebody else is experiencing similar occasional (admittedly temporarily scary) strangeness. Thanks Rob, yes, we are at the leading edge together, but the drive can be booted natively, and the data is never at risk.

    If I make an edit to the Win10 VMware settings (e.g., increasing the video RAM from 4 to 5 MB), it boots. It's like any edit to the VMX allows the boot. (The VMX is on the Samsung 950 NVMe.)

    Hey Paul - Just wondering but have you had any trouble booting your Win10 passthrough (PT) machine? I just shut down the Superserver to neaten up some cables and when I booted back up my Win10 PT was on the Automatic repair start screen. I restarted (per the message) and seem to be in a bit of a loop. I've tried shutting down, restarting, etc. and get the "Preparing Automatic Repair" and "Diagnosing Your PC" messages in a steady loop.


    ESXi starts up and runs just fine. Just can't seem to get past this. No changes were made (updates or other). I'll keep troubleshooting but thought I'd take a shot at seeing if anyone else has experienced this.

    True, but it's very workable. It would be nice to have the iKVM as a "true" additional monitor if the VT card is being used as well, but the reality is I'll only use the iKVM if something goes wrong, or for testing. Day-to-day, I'll be using the VT card with multiple monitors.


    The iKVM "should" work better, but this feels like pushing the envelope with the RDM and PCI passing through direct.


    Let me know if you get a chance to try to reproduce it. Otherwise, this setup is still pretty fantastic. Thanks again for all the info!

    But now you don't have video from your VM, don't you need that (along with Digi)?

    Had some time to play around with this. The blank screen or odd video happens only when something is plugged into the VT video card. It thinks the iKVM is a second monitor but doesn't quite display it properly. I turned the VGA in the BIOS back to Onboard, removed the pciHole entry from the VMX (was getting some boot issues with it in there for some reason), and unplugged all video cables from the VT card, and now the iKVM works perfectly.

    A video on the pass through would be great. It was a bit of trial and error, but really was a good exercise. The idea of being able to RDM existing hard drives from failed hardware is especially appealing. Going to have to try that out.


    One funny aside that took me longer than I care to admit to figure out was adding the pciHole.start = "2048" to the VMX. Silly me, I forgot that Linux, etc., does not like quotes that aren't straight quotes ( " ). Once I changed them to straight quotes, the VMX would be recognized again.
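    In other words, the .vmx line has to use plain ASCII quotes like the first line below, not the curly "smart" quotes that word processors and some blogs substitute, like the second:

    pciHole.start = "2048"
    pciHole.start = “2048”   (curly quotes, not recognized)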


    I don't need the fourth monitor, so I think I'll disable the VGA as well, like you have done. The (minor) issue is that the console screen is black. If I click in there (and switch to a monitor that is connected to one of the HDMI ports), the keyboard and mouse activate and I can move around the machine. The console works, it just won't give me any video. In fact, if I remove the Win10 password, I can see the login and then the desktop, and then the video does not change! It's just a nice picture of the Win10 desktop.


    What I wasn't very clear on in my comment (apologies) was where I'm doing this from - I am connected to the console from a separate laptop. (I have the Digi2 as well. All configured for use, but not in use at the moment.) As such, it's kind of a minor issue as I really only envision connecting to the Win10 pass through console for troubleshooting if something goes sideways on me.


    I think I'm going to shutdown the SS and change the BIOS for VGA offboard only to see if that has any effect.


    Safe travels!

    Great to hear from you, and glad to have clear demand for me to get some videos produced about pass through. Yes, the interactions with the BIOS setting for Onboard vs Offboard are a little tricky.

    If you activate that VGA monitor in Device Manager, you can get your 4th "monitor" going, I believe, but I frankly disabled it, sticking with 2 HDMI and 1 DisplayPort for my own needs. You're right about boot time: you set ESXi to auto-start your Windows 10 pass through SSD and video VM, and you'll see the ESXi boot screen for a few seconds, then it blanks out for a while as the GPU is passed through, then Windows 10 shows up. I don't hook anything up to VGA generally.


    At the moment, I'm booted to ESXi 6.0 Update 1a directly, with no Windows 10 VM running, in the middle of a long-running sdelete, and about to do some more travel for a couple of days. When I return, I should be able to go back to my SuperServer Workstation setup, and more easily assist.

    I'm confused on (at least) one point though. Do you have the Digi
    https://tinkertry.com/digi-anywhereusb2-usb-connect-over-ip-to-vm
    so that you'll have full mouse and keyboard, even when the iKVM window isn't given focus?

    Your site was/is exactly what I was looking for in rebuilding my home lab. Thanks for all the tips, videos, etc.! My Supermicro purchase is shaping up to be a great investment.


    Just a quick question - is there a trick that I'm missing to get the vCenter console working properly with the Visiontek card? When connected, I see the BIOS and Windows splash from my Win10 passthrough, but once Windows loads, I get a black screen. Interestingly enough, I have full control of mouse/keyboard, but no video. I'm wondering if re-enabling the onboard VGA in the BIOS broke this. When booting direct to the Win10 machine, the iKVM on the SuperMicro behaves in the same manner (blank screen, mouse/keyboard work). Any hints would be great! And really, keep up the fantastic work!

    I believe you can pass through just one of the NIC ports to a VM, regardless of whether the ports are onboard or in the PCIe expansion card. I have a setup similar to Paul's original vZilla host.

    smbiosDump | grep -A2 Manufacturer
    ...
      Manufacturer: "ASRock"
      Product: "Z68 Professional Gen3"
      Type: 0x0a (Motherboard)
    ...

    And on top of it, I have the following additional Intel NIC in the system:

    lspci | grep Ethernet
    0000:04:00.0 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) [vmnic2]
    0000:04:00.1 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) [vmnic3]
    0000:05:00.0 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) [vmnic4]
    0000:05:00.1 Network controller: Intel Corporation 82571EB Gigabit Ethernet Controller (Copper) [vmnic5]
    0000:0f:00.0 Network controller: Realtek Realtek 8168 Gigabit Ethernet [vmnic0]
    0000:11:00.0 Network controller: Realtek Realtek 8168 Gigabit Ethernet [vmnic1]

    One of the ports on the Intel NIC is assigned to my pfSense VM.



    Cheers

    The VIB has arrived!
    https://TinkerTry.com/how-to-install-intel-x552-vib-on-esxi-6-on-superserver-5028d-tn4t
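    For anyone who hasn't installed a standalone VIB before, the general shape of the install from the ESXi shell is roughly the sketch below; the file name is a placeholder, and the article above has the exact file and steps.

    esxcli software vib install -v /vmfs/volumes/datastore1/intel-x552-driver-example.vib
    reboot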

    Hi,


    Can you pass through just one of the i350 ports to a VM, leaving the other to ESXi?


    I'm looking at the X10SDV-F (no 10GbE ports), but would use the box for pfSense as well, with VT-d passing through the physical WAN connection.

    Let's try that picture sharing again, another way.

    Nick, it's been a while, and I should clarify:

    1) You make good points. AHCI M.2 acts kind of like SATA, but NVMe doesn't. I'll have one soon, my Samsung 950 PRO M.2 NVMe, to test things further, since our discussion above is largely guesstimates.
    https://tinkertry.com/samsung-950-pro-m-2-nvme-preorders

    2) Here's a lovely schematic:
    https://cdn.tinkertry.com/content/articles/596-superserverpics/SuperServer-System-Block-Diagram.png

    3) 10GbE
    The driver site is published, but the driver itself is still not there; VMware support is supposed to call me back about this tonight, actually:
    https://tinkertry.com/intel-x552-x557-10gbe-vmware-vib-has-arrived-oct-30-2015

    I've added VT-d limitations to the FAQ/Q&A right up top of the new ordering page https://tinkertry.com/superservers

    where you'll see Slennox has questions similar to yours:
    https://tinkertry.com/superservers#comment-2337932899

    I'm sure I'll find a solution after playing with it a little bit more.

    Admittedly, the reason for my bundle is because it works so well with this particular GPU/Mobo combination. I cannot predict how this GPU would work under ESXi 6.0.0b on a different chipset/motherboard. I can say that yes, this VisionTek 7750 works great for me, even without the pciHole.start edits to the VM's .VMX file, but then again, I don't suspend/resume the VM much either.

    FYI, I'm doing it all over again (creating the same build on a loaned but identical Supermicro server) for a conference I'm presenting at this weekend in Indianapolis:
    https://TinkerTry.com/meetup2015

    I'm sorry, it's just that VT-d (pass through) has been such a picky thing over the years, so I had to just pick one solution that happens to work wonderfully and go to 128GB of RAM, then test the heck out of it:

    http://www.wiredzone.com/supermicro-servers-compact-embedded-processor-sys-5028d-tn4t-10025066?affiliateid=3

    I will likely be doing a new "build your own vSphere 6.0 Update 1 datacenter" article when the time comes, based on this Supermicro SuperServer SYS-5028D-TN4T, to show just how straightforward it is with this wonderful little system. I know that's not the answer you want; it's just that there are only so many hours in the day, and a very limited budget for me to spend every 3-4 years on a new home lab. I can't possibly keep up with all the motherboards out there, and GPU passthrough has tended to be tricky for many folks over the years. I'm just so relieved I found something that "just works," and I hope you get yours tweaked to work as well.

    I'm still learning about ESXi and I have a question on step 17 of your build. Is this specific to your setup? I'm not familiar with the "pciHole.start" setting, so I was unsure about it for my situation, and moved on to the following steps. I have the same card (different server though) and I get a "code 43" error in Device Manager for the latest driver. If I restart the entire ESXi server I can get the screen to come up, but eventually, after it sleeps for a while, it will go black and I can't get it to come back up. If I RDP into it I can see the driver once again shows "code 43". I can't get the screen to come up again without restarting ESXi again...not a situation I want.


    I reached out to AMD and they instructed me to use Display Driver Uninstaller (DDU) to remove the driver and re-install the newest Windows 10 driver. Unfortunately my problem still remains. I'm assuming this card works great for you? So I want to see if it's my configuration, or if maybe I have a bad card. Any help would be appreciated! Thanks.

    Yes, I also read that SATA is muxed, but an M.2 SSD is not a SATA device; it uses a PCI Express x4 slot.
    Is it possible to pass only SATA through, and not the PCI Express x4 based M.2 SSD device?


    Which options can you suggest to get an additional drive that is not passed through, and can be used to boot from and as a datastore for ESXi 6?

    SATA0 is muxed with M.2, so no, the pass through would affect all SATA drives as well as any USB media; see also TinkerTry.com/ordersuperserver#QA

    I visited Intel's booth at VMworld 2015, and have contacts now that are looking into this, and assured me they'll do their best to get back to me, which I'll add to this article:
    TinkerTry.com/vmware-doesnt-support-supermicro-x552-x557-10gbe-yet

    Is it possible to boot from an M.2 SSD while having all the SATA ports passed through to a VM running ZFS (e.g. using appit), using ESXi?


    What about the 10GbE drivers? Any update on this?


    Thanks,
    Nick

    Yeah, you can't pass through the SATA ports alone; it takes USB along with it, and all VMFS datastores on all SATA ports are no longer seen. ESXi 6 seems to boot, but things get strange, including read-only mode of the USB flash drive that ESXi is on (changes made to the config don't persist through reboots). So, not a good option for anybody.

    That issue aside for a moment, what if we avoid it entirely? Would RDM mappings, to make the drives visible to the VM, in your case, zfs (which I haven't played with), work? I don't see why not.

    I do this same technique to make my Samsung 2TB SSD Windows 10 Pro install:
    https://tinkertry.com/mysamsung2tbssd
    visible to my Windows 10 VM (which really just boots that physical SSD). In other words, the VM boots the drive natively, and outputs the video to the GPU (using VT-d for pass through). I get about 70% of native drive speed at 4K transfer sizes, and more like 95% at larger transfer sizes.

    Wow, I need to draw this out at some point.

    Here's some more about RDM for now:
    http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2046370&sliceId=1&docTypeID=DT_KB_1_1&dialogID=719790756&stateId=0%200%20719804748

    Thanks again for the writeup. I really want to pull the trigger on this board, but I'm hesitant about the VT-d gotcha. One of my requirements would be to pass SATA through to a NAS VM in order to create a ZFS RAID/NFS share. I'd mount the NFS share back into ESXi as datastores for the rest of the VMs. If I understand your gotcha correctly, passing through the SATA would also take the USB along with it. How would I boot ESXi then (I was planning a USB boot)? Where could I store that initial VMDK for the NAS VM? I'm not sure any of this is possible given what you've mentioned.

    Also, if you pass through SATA/USB, does that take the PCIe x16 with it as well?

    Only 0.15TB written so far though...seems we're both having a lot of fun, next step, getting it ready to get work done too...

    All is going very well with it so far!
    https://TinkerTry.com/mysamsung2tbssd

    How's the 2TB drive faring in your SuperServer? I'm thinking of getting one as well... (When they are available in South Africa) My next challenge would be to find the NVMe SM951 (512GB) model from an online retailer.. This little box is potent - loving every second I get to tinker

    Yes, 2 10GbE and 2 1GbE ports, and no, ESXi 6.0 support for the 2 10GbE ports just isn't there yet, see also:
    https://TinkerTry.com/vmware-doesnt-support-supermicro-x552-x557-10gbe-yet
    and also:
    https://TinkerTry.com/ordersuperserver
    where I wrote:

    Q: When will Supermicro support the 10GbE ports for ESXi 6.0?

    A: Supermicro support is very aware of this issue, I have communicated with them directly. Partly based on my feedback, they quickly fixed their website at product launch in late June, to reflect that the 10GbE VIB (driver bundle) is not yet available for ESXi 6.0 owners. All other components, including health monitoring, work great out-of-the-box with ESXi 6.0. Supermicro expects this issue to be resolved by Intel soon.

    TinkerTry Tip: For now, you can use both 2 1GbE ports and 2 10GbE ports with Windows, and with ESXi 6.0, you can use 2 1GbE ports, and pass the 2 10GbE ports through to the Windows 10 VM. Nice!

    Okay, I am in love with this one, and I just checked on Wiredzone and it will cost $2800 for this configuration. My question is: does this come with dual 10GbE ports and dual 1GbE ports? That's what I seem to understand from the picture. In addition, are the 10GbE NICs now working, or are there any working VIBs for them?

    Peter, this is great feedback to hear, and I appreciate your taking the time to share it. Certainly looking forward to hearing how things go!

    It has arrived! What a magical server this is... Barely an inch deeper than my old HP MicroServers... With more oomph than all 3 of them combined! I'll be reporting back shortly on my experience with it!

    Ok, great. Thanks for the quick reply and clarification, and thanks for doing such great write-ups!

    Back when I wrote this, I was thinking a PCIe NVMe drive would be the 8th drive.

    As for the 7th drive, I hadn't yet discovered that when you install something in the M.2 slot, it takes priority over whatever is attached to SATA0, and that SATA0 device is then ignored. That connection is shared, aka muxed, as briefly mentioned in the manual on page 4-21:
    "The M.2 is mux with I-SATA0 port for legacy SATA SSD devices." http://supermicro.com/manuals/superserver/mid-tower/MNL-5028D-TN4T.pdf

    So you're right, only 6 SATA devices active at a time. The way I had come up with 8 was if a PCIe NVMe drive were also installed. Explained a bit better over here:
    https://TinkerTry.com/ordersuperserver#StorageLayout
    I should write up a whole new post about this; for now, I've got to finish the final rebuild of my home lab this weekend (Windows 10 Pro from scratch, since the 8.1 drive was corrupt, and vSphere 6 from scratch), re-import all the existing VMs, then back to work.


    But you're right, the title was unintentionally misleading, and I have now fixed it. Thank you, B., for catching this!

    I'm slightly confused (but very intrigued). Where do you get the 8 drive count? I see 6 sata ports and one M.2 slot.

    Well, having some drive issues, so I dove in head-first, and just plunked down for a 2TB SSD at Newegg at http://fave.co/1OqX471 (not in stock at Amazon at http://fave.co/1LCiJvk ), arrives Monday, Jul 27th.

    Next stop, creating a legit ISO of Windows 10 build 10240...should be interesting!

    That's a great idea - testing the other platforms in a nested VM... I've wanted to try KVM in-depth - but hadn't decided which base OS yet. The joy of getting my server will be to have ample resources to put the various options through their paces and decide on which core to use. Will start on CentOS and also unRAID as my firsts and move on from there... As for USB over IP - a cheaper solution for me would be to use a Raspberry Pi (http://store.raspberrypi.com/projects/virtualhere) as my host device.

    I only got this all working nicely a few hours before the blog post, and then had a bunch of travel, so I haven't yet had a chance to test out other hypervisors. I'm inclined to simply nest those under my ESXi rig anyway (such as Hyper-V)
    https://TinkerTry.com/vmware-esxi-5-1-can-run-microsoft-hyper-v-server-2012-vms-nice
    Which hypervisor did you have your eye on?
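    And if you do go the nested route for something like Hyper-V, the per-VM .vmx settings typically involved (beyond choosing the right guest OS type) are roughly these two; consider this a sketch, the article linked above has the details:

    vhv.enable = "TRUE"
    hypervisor.cpuid.v0 = "FALSE"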

    I hadn't really thought it through either (the keyboard issue), but that made figuring it all out much more fun ;-)

    Articles like this:
    http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033435
    but done on ESXi instead wouldn't work because I wanted a solution that connects even if I'm:

    1) booted natively to Windows 10

    2) running a VM with the same keyboard/mouse attached

    3) running a VM that hasn't logged in yet, so I needed to make a service out of consumer-focused USB Server
    http://sysnucleus-blog.com/2015/01/16/run-usbdeviceshare-as-service/

    4) still working on testing which device is most reliable, I'll probably also be testing out this more practical 4 port model:
    http://amzn.to/1SvhuN5

    I guess you do have Amazon shipping to South Africa:
    http://www.amazon.com/gp/feature.html/ref=amb_link_428249142_2?ie=UTF8&docId=1001120111&pf_rd_m=ATVPDKIKX0DER&pf_rd_s=merchandised-search-3&pf_rd_r=1586ZWH3M1494Q5JF8BZ&pf_rd_t=101&pf_rd_p=2045318642&pf_rd_i=230659011

    no idea if the shipping fees are affordable though.

    Thanks for that. I suppose I've never tried to pass through a keyboard or mouse on ESXi, so I would only have discovered this issue while tinkering one night - thanks for saving me the hassle! Have you tested other hypervisors yet? I will be running VMware first too, but may venture over to KVM to play later. Do you have any notion of when the Samsung SM951 NVMe will be available? I'm tempted to get the AHCI version in the meantime, but my impatience may cost too much.

    My page loads are acceptable - I suppose I am just accustomed to the +-300ms latency (lag!) on the majority of the websites we visit from SA. If you are looking for a CDN however, I know that CloudFlare fired up a few servers late last year. Otherwise RSAWeb offers their own CDN but at a premium.

    By the way, I'm trying to find a CDN that has good service in South Africa, for now, I'm still on AWS (Amazon Web Services), who don't have a point of presence near you yet, last I checked. I hope your page load speeds are acceptable, especially for image-laden articles such as https://TinkerTry.com/superserverpics

    So cool to hear from somebody so very far away, so glad you found TinkerTry!

    The onboard Intel RST based software/hardware RAID controller can do RAID5 for example, but only under Windows 10. So yeah, it would seem that creating a NAS while the BIOS has the drives in SATA AHCI mode (instead of RAID mode) would be a good fit for many folks, but I have not tried that.

    FYI, the marketing material at Supermicro
    http://www.supermicro.com/products/system/midtower/5028/SYS-5028D-TN4T.cfm

    has the following use cases in mind, for this Intel Xeon D-1540 chipset:
    Key Features
    • Space-efficient, compact design
    • Network Security Appliance
    • Cloud and Virtualization
    • High Performance NAS Servers
    • Business Critical Applications
    • Small and Medium Business



    You ask about USB: it turns out ESXi 6.0 (like 5.5 before it) doesn't let you attach things like USB keyboards and USB mouse dongles to the VM, only things like mass storage and printers. So the USB over IP adapter is very necessary, for controlling this fast VM that has the disk and AMD GPU both passed through to it.


    I hope this helps a bit, and I can't wait to hear how things (eventually) turn out for you!

    Once again - awesome read and so amazingly informative for someone about to embark on a similar quest. I've still got 2 weeks to wait for arrival here in South Africa, so I will spend my time doing all the homework needed. One question I have is - what would be the best approach to create a redundant storage array if the PCI-E slot is being used for the Display card? Would I RDM 4 SATA disks through to an unRAID VM? FreeNAS? Any suggestions? I know that once I have the unit I will have all the time in the world to discover this - but in the meantime I must settle for building this in my head. Also, is it necessary to have the USB over IP adapter? Can I not just assign a USB input in ESXi?