What fits in any home virtualization lab, has 8 Xeon cores, 6 drives, 128 GB memory, and 3 4K outputs from a Windows 10 VM? Your new Supermicro SuperServer Workstation!

Posted by Paul Braren on Jul 15 2015 (updated on Mar 31 2018) in
  • ESXi
  • Virtualization
  • HomeServer
    Imagine an efficient little server that can double as a workstation. I don't mean one or the other, like that dual-boot nonsense. I mean both: a whole-hog VMware vSphere 6.0 datacenter alongside a fancy-schmancy Windows 10 workstation, with a dedicated keyboard and mouse. Simultaneously. After you OMG, hold off on the LOL until you actually read the rest of this success story.

    Aug 19 2015 update - This pre-configured bundle is now available! Assembly, BIOS configuration, and Windows 10 install procedures are also now available here, demonstrated in a new video here.

    This quiet little (8" x 9" x 11") mini-tower can be stuffed with 6 drives total, including the latest-fastest-littlest NVMe drives, right in that included M.2 slot or in the PCIe slot.

    Idles under 40 watts, but even with heavy loads and all 4 3.5" hot-swap drive bays full, it only gets up to 85 watts or so.

    I'm not done yet. Remember my article about how ESXi is usually run headless?

    What if you could get a half-height PCIe graphics card in there, with 2 HDMI ports and a Mini DisplayPort right on that little backplate? Not just an anemic old card, but a modern card with enough GPU grunt to handle 3 4K monitors, with audio. Yes, that's 25 million pixels total. At a cost of about 50 additional watts, with the 250 watt power supply still not even half "full," and those CPU, PSU, and chassis fans still not maxed.
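    The pixel counts quoted here are easy to sanity check; a quick sketch, assuming standard 4K UHD (3840x2160) panels:

```python
# Sanity-check the pixel counts quoted in this article.
# Assumes three standard 4K UHD panels (3840x2160 each).
three_4k = 3 * 3840 * 2160
print(three_4k)   # 24883200, i.e. "25 million pixels total"

# The author's current trio (two 1920x1080, one 2560x1440):
current = 2 * 1920 * 1080 + 2560 * 1440
print(current)    # 7833600, i.e. "about 7.8 million pixels"
```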


    How can this be? Well, it just so happens the Supermicro SuperServer leverages Intel's pretty sweet recent innovation they call SoC (System on a Chip). It's where you jam a lot of components onto a 6.7" x 6.7" (17 cm x 17 cm) motherboard by incorporating much of the watt-burning stuff you'd usually add on later via PCIe, such as 2 VMware-friendly Intel i350 1GbE ports and 2 10GbE ports. Putting it all right on the mobo increases efficiency.

    None of that legacy junk cluttering things up either, or wasting watts. Yes, time to leave IDE, serial, parallel, and audio ports and chips behind.

    First Intel engineered all this datacenter goodness right on that little mobo that could, then they apparently tuned the heck out of it all, ensuring the overall component package with the CPU permanently attached uses as few watts as possible. Yes, that includes that roughly $800 8-core Xeon D-1540 CPU. It's smart enough to sip power when idling, heading south of 1GHz when there's nothing much going on. As soon as some cores feel demand, they're able to instantly jump right up to 2.4 GHz of turbo.

    SMCI_X10SDV-TLN4F_Angled
    Click on the image for a much closer look.

    So, which company picked up on the promise of this nifty little pre-assembled motherboard/CPU combo? That's 20-year server veteran Supermicro. After years of a home virtualization scene dominated by a variety of whiteboxes, Mac Minis, Intel NUCs, and various other 16 GB or 32 GB memory-maximum systems, Supermicro serves up this little guy that's just dripping with awesome sauce, making it SOOO much easier for me to pick what I'd recommend for your virtualization lab. Honestly, I've been waiting over 4 years for the chance to finally have a new server I can recommend.

    The Supermicro SuperServer 5028D-TN4T arrives pre-assembled in a lovely little chassis, usually directly from Supermicro in San Jose, CA. All you need to bring to the party is some new DDR4 memory and disks; it's all available here, for example.

    Thanks for the memories.

    benchtesting.JPG
    Bench testing, looking at watt burn with Kill A Watt EZ.

    Why is it I keep mentioning virtualization? Well, what do home virtualization lab enthusiasts like myself tend to run out of first, when running many useful VMs 24x7 for a few years? Memory!

    You remember the old days, when you bought DIMMs, then tossed them aside for newer bigger DIMMs a few years later? Forget that. How about this time, you invest in your IT career, and your home lab, by getting 2 modern groovy ECC DIMMs now, weighing in at a hefty 32GB each. Yep, you heard right, that's 32 friggin' GBs each, for a total of 64 GB today. Oh, at a modest 1.2 volts.

    Wait, there's more. You can still add 2 more 32GB DIMMs later on, for a total of a sweet, glorious, virtualization-friendly 128GB of RAM, in a home server? Holy crap!

    Ok, you're wondering about the price right around now, reserving your excitement, waiting for the bad news. Luckily, it's not as tough to stomach as you might think, especially if you're at all serious about your virtualization needs over the next 3-5 years. Hang on just a little longer.

    A serious workstation too.

    Here's the kicker. Why the heck am I talking about adding a video card to your home virtualization server? Well, perhaps you simply can't afford not to. In other words, you'd like to also be able to use this system as a workstation. That's exactly what I set out to do recently, to replace my suffering triple-monitor-attached laptop and a rat's nest of cables with one sweet little box. I've done it, and it all works! Even better, the installation of

    • Windows 10 (GA coming by July 29th!), as I described here
    • VMware ESXi 6.0
      was all rather straightforward, either on bare metal or in VMs.
    iKVM-showing-ESXi-6

    Hallelujah, it's about time! Thank you Intel. I know you had IT providers, datacenters, and small businesses in mind with this little guy, but wow does this little server rock for use in the homes of the people who work at those companies. And wow, what a performant NAS this could also make.

    Thank you Supermicro, for being the first to step up and put this into a chassis that fits right at home. Extra bonus that Supermicro already happens to be the darling of home labs (see Serve The Home), with wonderful Remote Control over IP (iKVM) that rivals HP iLO.

    VisionTek-card-installed.JPG

    The VisionTek Radeon 7750 900686 is available at Wiredzone, Amazon, or Newegg.

    Thank you VisionTek, for that nice little video card that (barely) fits. Windows 10 auto-installs your AMD drivers and you're good to go. I tested it on my 2560x1440 DisplayPort Nixeus 27D monitor, and it's future-proofed for better panels down the line.

    There's more good news, if you're willing to get creative...

    What about that day you wish to take your ESXi offline for whatever reason, presumably after vMotioning your VMs to another ESXi host (server) somewhere? Now what? How do you get work done on your beloved SuperServer?

    Guess what. That SSD you have in this system can be booted exactly as is, not only as a VM. Yes, it boots both ways, the same Windows 10 instance, the same data on there.

    • Boot Windows 10 up (with no ESXi 6.0 running)? Sure. A workstation that just works, as a workstation.

    • Same Windows 10 booted as a VM? Yep, you can do that too, just like VMware Fusion does on a dual-boot Mac.
    TinkerTry-pic-of-5028D-low-angle

    Yeah, that just happened, you read that correctly. This little bonus perk takes just a bit more work, with 2 lines to type into your ESXi shell via PuTTY (command line) to get the RDM (Raw Device Mapping) configured, but don't worry, I'll document how. I've already got a pretty big hint right over here. Yes, a second drive that your VMs live on is needed, just for the VM config files such as the VMX file, alongside that tiny magical pointer file that tells the VM how to find your actual Windows 10 workstation SSD.
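    For the curious, the RDM pointer-file creation tends to look something like the following, run from the ESXi shell. This is a sketch only: the device ID and datastore path below are placeholders, not the actual ones from this build.

```shell
# Sketch: create a Raw Device Mapping (RDM) pointer file for a local SSD,
# so a VM can boot the same physical Windows 10 disk.
# The t10.* device ID and datastore path are placeholders; list your own
# device IDs first:
ls /vmfs/devices/disks/

# Then create the physical-mode RDM pointer file on your VM datastore:
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_SSD_ID \
  /vmfs/volumes/datastore1/Win10/Win10-rdm.vmdk
```

    The resulting tiny .vmdk is the "magical pointer file" mentioned above; it lives on the second drive's datastore while the actual bits stay on the Windows 10 SSD.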

    Heck, you could even slap that SAME 2.5" SSD into a laptop and take it on the road if you really wanted to, then pop it back in upon your return. Oh my, this just keeps getting better.

    What's the catch of dual booting a single SSD this way? Licensing might give you some issues, I'll just have to wait-and-see, once Windows 10 actually arrives.

    I'm so glad I've finally been able to turn one of my old articles into a moot point, thanks to Intel, Supermicro, and a bit of tinkering. Which article? Little secret those new to virtualization often miss - ESXi 6.0 continues to be mostly headless, just as it was for all prior VMware hypervisor releases.

    Nov 02 2015 Update - VMware support for 10GbE has arrived!
    Optionally, since you cannot yet see the 10GbE interfaces natively from ESXi 6.0, you can use VT-d, aka DirectPath I/O, to pass through those 2 10GbE interfaces to your Windows 10 VM, if you want. Yep, that just happened, it works, and works well. Here are the names Device Manager comes up with:

    My-two-AMD-PCI-Devices-DirectPathIO

    Ethernet0 vmxnet3 Ethernet Adapter
    pciPassthru0 Intel X552/X557-AT 10GBASE-T
    Ethernet 3 Intel X552/X557-AT 10GBASE-T #2

    Notice it even simply calls it pciPassthru0, interesting!
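    If you'd like to see what ESXi itself knows about those passthrough candidates before handing them to a VM, the ESXi shell can list them. A sketch, not output from this exact box (the "X552" filter string is an assumption based on the Device Manager names above):

```shell
# List PCI devices as ESXi sees them, filtering for the on-board 10GbE NICs.
# Run from the ESXi shell (SSH/PuTTY). Enabling passthrough itself is done
# in the vSphere client (Configuration > Hardware > Advanced Settings),
# followed by a host reboot.
esxcli hardware pci list | grep -i "X552"
```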

    I wound up sticking with a vmxnet3 NIC type when running as a VM, with the normal i350 driver loading when I'm booted natively into Windows 10. That means I only had to pass the GPU through, so that Windows 10 could automatically download the AMD drivers.

    Imagine, once 10GbE support does arrive, the power of a 3 node cluster for some amazing 10GbE vSAN speeds here, without the complexity of InfiniBand.

    Some minor gotchas to be aware of, for those using Windows 10 as a VM.

    • The keyboard and mouse pass-through to the VM was straightforward, once I found that the SIIG USB over IP 1-Port works fine, coupled with any 4 port USB 2.0 hub you might have. I don't notice latency/lag; admittedly, I don't plan on much gaming either.

    • Initial setup of this server will require another PC temporarily, but once you set this Windows 10 VM to autostart with ESXi 6.0, you're good to go.
    thumbnail
    • BIOS - you can use another PC's browser, or even the mobile Supermicro IPMI app, to use iKVM (remote console) for BIOS access or ESXi re-install.

    • With this SoC (System on a Chip) design, it's not possible to take just the USB 3.0 controller and pass it through to the VM, because it passes everything else too (SATA, USB 2.0, etc.), see schematic here.

    • You'll want to have good backups of everything, to another system or NAS, but that's true for any systems you build.
    EricS-testimonial
    Thank you so very much Eric, and I sure look forward to testing a Veeam Repository as a backup target soon!
    • The fan on the added GPU card is a bit louder than all the rest, even at idle. This keeps the GPU heatsink cool to the touch, even after long benchmarks with the chassis cover on. If noise is a concern, I have found and tested a way to get this little beast about 20' from where you've got your monitors. That's right, you can situate it in another room, yet still be able to power cycle, access USB 3.0 devices at full speed, and more. I will also be testing the effectiveness of the Computer & XBox Noise Reduction Kit - Dynamat Xtreme 40401.

    Aug 15 2015 Update - The Dynamat works very well, see comment below a related article here.
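    The VM autostart mentioned above can also be set from the ESXi shell rather than the client; a sketch, where the VM ID (1) is a placeholder you'd look up first:

```shell
# Enable host-level VM autostart, then add the Windows 10 VM to the list.
# Run from the ESXi shell; find your VM's ID first:
vim-cmd vmsvc/getallvms

# Enable autostart on the host:
vim-cmd hostsvc/autostartmanager/enable_autostart true

# Add VM ID 1 (placeholder) with a 60 second start delay:
vim-cmd hostsvc/autostartmanager/update_autostartentry 1 powerOn 60 1 systemDefault 60 systemDefault
```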

    Detailed build procedure:

    Aug 19 Update - Assembly and Windows install procedures are now available here. I don't have an ESXi 6.0.0b upgrade procedure documented yet, meanwhile, here's the gist, likely to eventually be published as a second article.

    1. assemble, attaching power, IPMI port, and at least one Ethernet port (bottom left)
    2. install VisionTek 7750 3x4K graphics card, available from Wiredzone, Amazon, or Newegg.
    3. install SATA DOM for ESXi 6.0.0b if you'd like, or just put it on a USB flash drive, which I prefer (leaving the valuable SATA port for bigger storage options)
    4. power up
    5. find the IP address recently leased to the IPMI interface via DHCP
    6. use another PC to remotely access that IP over Web UI
    7. start iKVM
    8. install Windows 10 on local SSD
    9. before Windows Update can auto-upgrade GPU drivers, download and install AMD Catalyst Driver for Windows 10 64-Bit 15.7.1
    10. mount ESXi 6.0 Hypervisor ISO
    11. use F11 to boot from that ISO
    12. install/configure ESXi
    13. configure pass-through for the only two AMD devices seen, reboot ESXi
    14. create a datastore
    15. create the RDM mapping to that SSD that has Windows 10 on it
    16. create a Windows 10 VM, using the RDM drive for boot device, and VMXNET3 for network
    17. add pciHole.start = "2048" to the end of the VM's .VMX file
    18. install VMware Tools, reboot VM
    19. attach 13 port Anker USB 3.0 hub
    20. attach USB 3.0 devices to that hub, and map those to your VM, as your needs dictate (keyboard and mouse won't be available yet)
    21. RDP to that VM
    22. disable the VMware brand Display adapter (in Device Manager)
    23. add Silex USB over IP 1-Port DS600 device to your network
    24. add USB Server to your Windows 10 VM, right-click mouse and keyboard to auto reconnect after reboot
    25. make a service out of this USB Server, so that your mouse and keyboard connect even if you reboot and aren't logged in yet, procedure here
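    Steps 16 and 17 above boil down to a few lines in the Windows 10 VM's .vmx file. A hypothetical sketch, where the RDM pointer file name is a placeholder (yours will be whatever you named it in step 15):

```
# Hypothetical excerpt from the Windows 10 VM's .vmx file (names are placeholders).
# The RDM pointer file created earlier attaches like any other virtual disk:
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Win10-rdm.vmdk"
# VMXNET3 NIC, per step 16:
ethernet0.virtualDev = "vmxnet3"
# Step 17's addition, needed for GPU passthrough with large memory mappings:
pciHole.start = "2048"
```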
    Panoramic-image-of-paul-braren-displays-jul-2015

    I'm headed on a road trip to VTUG Maine this week, so I'll need to save the rest of the details of this super fun project for another day, including video of the installation of the video card, for example. And note, I "only" have about 7.8 million pixels currently pumped to my 3 monitors (2 1920x1080, and 1 2560x1440), but good to know there's room to grow ;-)

    Stay tuned!


    What it'll cost ya.

    sys-5028d-tn4t_open-cropped
    Wiredzone, Supermicro Authorized Reseller

    The Server

    Now for the pricing...

    [AUG 03 2015 update - a custom, pre-configured bundle is now available! Details below.]

    Supermicro SuperServer 5028D-TN4T
    at Wiredzone

    • About $1200 USD for the bare bones system with 3 year warranty, just add memory and drives.

    • About $1828 USD if you want Wiredzone to include 2 of the only Supermicro-approved Samsung 32GB DDR4 DIMMs, with room for 2 more.

    It's not available on Amazon or Newegg; I got my system (CPU/mobo/power/mini-tower pre-assembled) at Wiredzone, for the reasons outlined here. If you appreciate the information and videos you've found here at TinkerTry, and you decide to buy, please consider using the above link.

    I've pulled together an Amazon shopping cart, see also more about the accessories over here.

    The Add-on Parts

    The Video

    Assembly, BIOS config, Windows 10 install procedure.
    Close look at the Supermicro SuperServer 5028D-TN4T vSphere Datacenter/Workstation Hybrid.


    AUG 03 2015 Update

    New TinkerTry bundled server/workstation now available!

    So here we go. Now that you're a fully informed reader, ordering this very special bundle means that what I have will be identical to what you have, ensuring:

    • your experience will much more closely match this blogger's first-hand experience
    • we'll be able to share tips and tricks for years to come.

    This is a SuperServer/Workstation combination that I can very comfortably recommend, with a price point that's a lot more realistic than buying from the bigger companies generally more focused on 24x7 support than energy savings. This box fits the home server bill very nicely.


    Nov 01 2015 Update

    New TinkerTry Bundles now available, all listed here:

    This is partially a response to this admission, about the leading-edge Bundle 1:

    This system is ready for self-install of VMware vSphere 6.0 with (future) TinkerTry article guidance, running your same Windows 10 SSD as a VM. If you want your ESXi and your Windows 10 SSD running concurrently, VMware ESXi hypervisor passthrough skills are required, along with a separate USB-over-IP switch, and a willingness to occasionally use a second system for initial configuration/administration. I realize a leading-edge combined SuperServer Workstation (that I use extensively) is not for everybody. I suspect many proud owners have simply dual booted theirs.

    Alternatively, the Bundle 1 can be used as a pure Windows 10 workstation with 3 4K video outputs, of course.

    Read all about the Digi that makes this all work here:


    May 07 2017 Update

    • On VMware ESXi 6.5.0d, attempts to pass through the AMD 100-506008 Radeon Pro WX 4100 4GB Workstation Graphics Card were a modest success on BIOS 1.1c, working on about 50% of VM boots, with no edits to the VMX file required. I then noticed the passthrough broke after the move to BIOS 1.2, so I re-specified the hardware for passthrough in ESXi, rebooted, then pinned those 2 AMD devices to the Windows 10 Creators Update VM. But every boot now fails to see the GPU at all. The VisionTek 7750 included in the Bundle 1 SuperServer Workstation continues to work with BIOS 1.2, as it had with BIOS 1.1.

    Disclosure: TinkerTry makes a modest commission on each Wiredzone sale only if you use one of the affiliate links found at TinkerTry. Wiredzone is an authorized reseller that charges very competitive prices. Please consider sharing the URL TinkerTry.com/superservers. This source of web site funding goes directly into delivering more value to enthusiastic fans, and it sure beats complete dependency on advertisements. No sponsored posts, and all relationships with any vendors disclosed. All hardware and software was purchased, and any rare exceptions (loaners) are clearly mentioned. I'm a very discerning buyer who is relieved to finally have a highly-upgradeable virtualization server that is also widely available. The basis of years of fun and interesting articles to come. This common platform helps us to help each other more effectively, reaping maximum benefit from such a significant mutual hardware investment.


    Mar 31 2018 Update

    Here's AMD's very detailed, albeit Windows 7-vintage, write-up on doing passthrough. Note that these cards don't actually have video outputs; they're instead intended for Horizon View PCoIP applications/thin clients:

    MxGPU-Setup-Guide-VMware
    • GPU Setup Guide with VMware®

      2.1 Hardware Requirements
      2.1.1 Host/Server
      Graphics Adapter: AMD FirePro™ S7100X, S7150, S7150x2 for MxGPU and/or passthrough
      ***note that the AMD FirePro™ S7000, S9000 and S9050 can be used for passthrough only
      Sample of Certified Server Platforms:
      • Dell PowerEdge R730 Server
      • HPE ProLiant DL380 Gen9 Server
      • SuperMicro 1028GQ-TR Server
      Additional Hardware Requirements:
      • CPU: 2x4 and up
      • System memory: 32GB & up to 1TB; more guest VMs require more system memory
      • Hard disk: 500G & up; more guest VMs require more HDD space
      • Network adapter: 1000M & up


    See also at TinkerTry


    See also

    sth-review-screenshot

    BOTTOM LINE:
    With four port Ethernet (two 10Gbase-T and two 1Gbase-T), solid storage m.2 PCIe x4 and 6x SATA III, a fast and low power CPU (Intel Xeon D-1540) and 128GB of RAM, the Supermicro X10SDV-TLN4F is a must get platform. For those still using Intel Xeon L5520 or L5620 generation processors, one can get more performance in less than half of the power and space footprint which is astounding. For those that always wanted more than the Xeon E3 line could offer in terms of their limited RAM capacity (practical 32GB limit) and core count (4C/ 8T max), this is the answer.