Superguide: SuperServer home virtualization lab storage tiers, platinum through bronze, how many efficient drives fit inside this tiny chassis?

Posted by Paul Braren on Sep 25 2015 (updated on Jun 16 2016) in
  • Storage
  • HomeServer
  • HomeLab
  • Virtualization
  • Windows
  • Superguides

    You can go with RAID for increased resilience and/or speed. Or you can go simple JBOD, and use the magic of VMware's Storage vMotion to move VMs among tiers at will, depending on your speed needs.

    SuperServer_5028D-TN4T-opened

    I have chosen the latter approach for my Supermicro SuperServer 5028D-TN4T. Note that the embedded Intel RST is capable of basic RAID 0/1/5, via its usual hardware-configuration and software-install combo.

    Future articles will cover my onsite backup and offsite replication strategies, which help compensate for my lack of RAID. Frankly, I don't miss RAID, choosing to invest in efficient next-gen storage rather than relatively slow (and hot) storage controllers. When you can get well over 1,000 MB/s for reads and writes from a tiny M.2 card, it's kind of hard to go back.

    Samsung-SM951-M.2-inside-SYS-5028D-TN4T-edited.JPG
    Samsung SM951 M.2

    I am currently testing a SuperServer with both the M.2 socket and the PCIe 3.0 slot occupied, running smoothly and concurrently. For the time being, my loaner test rig is blessed with the fastest (and only) consumer products out there in these two new form factors:

    Intel_750_installed_in_Supermicro-SYS-5028D-TN4T-edited.JPG
    Intel 750 Series NVMe.

    What this testing has given me is a clear path forward for my home virtualization lab's storage configuration, all running under VMware vSphere 6 / ESXi 6.0 Update 1, with plans to run Hyper-V too. With 128GB of RAM, there's plenty of room for nested testing.

    Special thanks for a big assist from home virtualization lab guru Andreas Peetz of VMware Front Experience. As far as I can tell from the Googles, he assured the world that ESXi 6.0 is NVMe-ready out of the box, get this, before VMware did! His amazingly practical AHCI article helped me with my vZilla, and going forward, it helps anybody with an M.2 device that isn't initially visible under ESXi 6.x, as long as you disable IPv6 before running his super-simple one-liner and rebooting. Thank you, Andreas!
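
    If memory serves, his fix boils down to something like the sketch below, pulling the sata-xahci mapping VIB from his V-Front Online Depot; defer to his article for the authoritative, current steps:

        # Disable IPv6 first (takes effect after reboot)
        esxcli network ip set --ipv6-enabled=false

        # Community-supported VIBs may require lowering the host acceptance level
        esxcli software acceptance set --level=CommunitySupported

        # The one-liner: install the sata-xahci VIB straight from the V-Front Online Depot
        esxcli software vib install -d https://vibsdepot.v-front.de -n sata-xahci

        reboot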

    IMG_8535editedserial.JPG
    Foreground SuperServer has SM951 M.2 installed, background SuperServer has Intel 750 NVMe installed.

    I'm already running VMware ESXi 6.0 Update 1 with these tiers of service, known in the enterprise world by similar terms like Storage QoS, Storage Classes, and Storage Containers. Basically, this is the simple version of all that, at obtainable prices. Yes, it's an investment in my IT career, using mostly drives I had already bought. This rig is well beyond what's needed for mere VCP exam prep. It's also the foundation for years of TinkerTry articles to come, as I tinker with software running at the speeds I need, on one of the fastest and most versatile home labs out there. Even better, one that anybody can buy.

    Pay attention to the idle watt-burn specs I list below for each drive: for a given amount of data to move around, SSDs get their work done faster, so they'll spend more of their time at idle.

    M.2-and-NVMe-speeds-compared-side-by-side-tinkertry.com
    Informal ATTO speed test, done from Windows 10, using latest UEFI BIOS 1.0b, and the latest SSD firmware.

    Figured it might be fun to share my configuration as-is, assembled over time with storage pulled from vZilla, even though it isn't quite done yet. Well, is anything we accomplish ever really done?

    What I mean by almost done is that I'm currently using my SuperServer vSphere 6.0 lab as a workstation too. That means my one PCIe slot is occupied by a (triple-4K-output) GPU, so I probably won't be investing in PCIe storage for now. That means the only piece missing from my "production" SuperServer, the one I bought, is next month's release of the Samsung 950 PRO M.2 NVMe. Yes, that means that for TinkerTry, and anybody else ready to jump to next-generation storage, Platinum is the new Gold!

    Please drop your comments/questions/concerns below.


    0.5 TB (512 GB) capacity (MZ-VKV512), 100% VMFS datastore
    V-NAND, M.2 2280 interface, uses NVMe, no SATA or AHCI

    950PRONVMeM2
    • Samsung 950 PRO (M.2 NVMe)
      5.7 watts average, 7.0 watts peak, 0.07 watts idle (source).
      2,500 MB/s reads, 1,500 MB/s writes - manufacturer specs.
      Speed tests (not yet).
      This device is next-generation storage, leaving both the SATA3 connector and the AHCI protocol behind. What I really want to see is random 4K workloads, which help indicate how much faster this will "feel." Before this, we only had the hard-to-find, OEM-only Samsung SM951, which still uses AHCI. Finally, we'll have wide availability of M.2 NVMe. The future is coming soon! And if I were willing to give up the GPU in my one PCIe slot, products like the Intel 750 PCIe NVMe could also be pretty interesting, with the potential for higher capacities.

    While there is NVMe support bundled with ESXi 6.x, the 6.0 Update 1 driver was found to be slow with Intel's 750 Series PCIe NVMe SSD, and it sped up tremendously once I replaced it with the right driver. So let's hope the 950 PRO performs well with the native driver, and if not, let's hope that Samsung/VMware releases a driver VIB to let it shine. That's not a given, since this is really a consumer drive in an enterprise role.
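
    If a driver swap does turn out to be needed, the general pattern looks something like this; a sketch only, since the exact VIB and bundle names depend on what Intel/VMware publish:

        # See which NVMe driver the host is currently running
        esxcli software vib list | grep -i nvme

        # Install the vendor driver from an offline bundle copied to a datastore
        # (hypothetical path and file name -- use the actual download)
        esxcli software vib install -d /vmfs/volumes/datastore1/intel-nvme-offline-bundle.zip

        # A reboot is required for the new driver to take effect
        reboot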

    Jan 03 2016 Updates:

    1. Good news: the Samsung 950 PRO M.2 NVMe drive is readily available; I got mine in October 2015. It "just works" with ESXi 6.x, with no configuration or manual VIB (driver) install needed. I've tested it using the built-in NVMe driver that ESXi 6.0 Update 1a includes, and it's showing excellent speeds (see the quick check after these updates).

      ATTO-on-840-evo-msata-1tb-with-flash-read-cache-enabled
      ESXi 6.0 Update 1a running a Windows 10 VM on Samsung 840 EVO mSATA VMFS datastore, with reads accelerated by Samsung 950 PRO as Flash Read Cache.

      Here are some disk benchmarks and file-copy tests done under Windows 10. I haven't done many VMware tests yet, but I can say that deploying a Windows 10 VM from a template in 12 seconds feels pretty awesome. Be sure to also click on the ATTO Disk Benchmark at right for a closer look at one brief "what if" test I did, helping confirm that the 950 PRO is pretty happy running as Flash Read Cache in a vSphere 6 home lab. I know I'm very happy to have it!

    2. As Mathieu kindly pointed out below, my original plan of a 0.5 TB (512 GB) drive split 75% VMFS datastore / 25% Flash Read Cache couldn't be implemented, as confirmed in my reply. The Platinum tier of this storage strategy article has been revised above, now calling out that I'm using 100% of the 950 PRO for VMFS. Yes, my home lab has no Flash Read Cache, for now. I'm looking into using another one of my SSDs for this.
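
    To confirm that the 950 PRO really is claimed by ESXi's built-in NVMe driver (per update 1 above), a quick look from the ESXi shell does the trick. This is just a sketch; your device naming will vary:

        # List storage adapters -- the 950 PRO should show up bound to an nvme driver
        esxcli storage core adapter list

        # List storage devices and filter for the Samsung controller
        esxcli storage core device list | grep -i samsung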

    2 TB capacity, Windows 10 VM pass-through
    3D V-NAND, SATA3 interface, uses AHCI

    Samsung850EVO2TBWindows10OEM
    • Samsung 850 EVO (MZ-75E2T0B/AM)
      4.7 watts average, 7.2 watts peak, 0.06 watts idle (source).
      540 MB/s reads and 520 MB/s writes - manufacturer specs.
      Speed tests of 559/535 at TweakTown.
      Windows 10 Pro 64-bit OEM preload from WiredZone, as part of the virtualization-ready bundle. I'm using a simple RDM mapping to pass this SSD right through to a VM (see the sketch below), but I occasionally travel, and this SSD runs in my laptop just fine; read the details here.
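
    For the curious, creating that kind of RDM mapping by hand looks roughly like this; a sketch with hypothetical device and datastore names, so check the linked article for the full walk-through:

        # Find the SSD's device identifier
        ls /vmfs/devices/disks/ | grep -i samsung

        # Create a physical-mode RDM pointer file (-z) on an existing VMFS datastore,
        # then attach the resulting .vmdk to the Windows 10 VM as an existing disk
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____Samsung_SSD_850_EVO_2TB_XXXXXXXX \
            /vmfs/volumes/datastore1/Win10/850evo-rdm.vmdk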

    Why is it actually good to have a dedicated drive just for my "daily driver" VM, which serves as my workstation? Because I drive heavy I/O through that device (rendering videos, copying files, editing pictures), and no other VMs touch this datastore in any way. Thus, performance is consistent.


    1 TB capacity, VMFS datastore
    Toggle-mode DDR NAND, SATA3 interface, uses AHCI

    mSATA-1TB-Samsung-840EVO_with_enclosure

    0.25 TB capacity (256 GB), dedicated to read cache
    Toggle-mode DDR NAND, SATA3 interface, uses AHCI

    Samsung830-with-caddy

    6 TB capacity, VMFS datastore
    SATA3 interface, uses AHCI

    WDRed6TB

    4 TB capacity, VMFS datastore
    SATA3 interface, uses AHCI

    B00FQH7MQ2
    • Seagate 4TB SSHD (ST4000DX001)
      7.4 watts average, 6.2 watts idle (source).
      180 MB/s reads and 180 MB/s writes - manufacturer specs.
      Speed tests of 178/178 at TweakTown.
      This is one of those hybrid drives, with a bit of flash cache for an automatic speed boost (no special drivers needed). Its role will likely be the Windows Server 2012 R2 Essentials VM's D: drive, dedicated to network shares.

    4 TB capacity, VMFS datastore
    SATA3 interface, uses AHCI

    B00B99JU4S
    • Seagate 4TB HDD 5900 RPM SATA (ST4000DM000)
      7.5 watts average, 5.0 watts idle (source).
      142 MB/s reads and 142 MB/s writes - manufacturer specs.
      Speed tests of 165/156 at TweakTown. For general-purpose VMs and "swing space." This is the slowest of the spinny drives, but it's very useful for those occasions when I need a "spare" drive. This is not a hybrid, just a traditional spinning 3.5" drive. Should a drive fail, or should I wish to evacuate one of the primary datastores, it sure is handy to have someplace to put things, even if only temporarily.


    6 TB RAID0 capacity, VM datastore
    Gigabit interface, NFS and/or iSCSI

    BC214se_2300
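
    Mounting the NAS as a datastore is a one-liner from the ESXi shell (or a few clicks in the vSphere Client). Here's a sketch with a hypothetical NAS address and export path:

        # Mount an NFS export from the NAS as a new datastore named NAS-NFS
        esxcli storage nfs add --host=nas.lan --share=/volume1/vmstore --volume-name=NAS-NFS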

    So, how many drives can you fit, total?

    superserver-system-block-diagram
    Click the image above to see it full size. From page 11 of the Instruction Manual.

    OK, if you're paying attention, you'll realize it's true: while the SuperServer 5028D-TN4T has "only" 6 Intel SATA3-speed connections, the M.2 slot isn't SATA, so that's a 7th drive. And I happen to be using a GPU in my PCIe slot for now, but guess what: if you go with NVMe in there, that's the 8th drive!

    Answer: You can fit 8 drives into less than a cubic foot!

    It actually all runs quite cool, even with the lid off! Oh, and if you're counting the total capacity, that's roughly 18TB of storage in all, not counting the NAS.

    If you're wondering where I installed ESXi 6.0U1, that's on a 1GB portion of a diminutive USB flash drive, at a cost of about 12 bucks.

    Yes, you can have both M.2 and PCIe storage, and they're not sharing PCIe lanes. This is where this new Intel motherboard design really shines.

    ESXi 6.0 Hypervisor actually only requires 1GB.
    Read more about the motherboard logic design starting at page 11 of the Instruction Manual.

    10_drives_in_there

    If you count the USB flash drive running ESXi as a drive, that's the 9th drive, and if you count iSCSI as a drive, well, that'd be #10, but that's not inside the box.

    It's so nice that the Xeon D-1540 SoC (System on a Chip) platform eliminates the M.2 concerns that this excellent PCWorld article mentions.

    Yeah, that article was written back in April 2015; my, how times have changed since then!

    Leave your comments/thoughts below.

    NVMeStack

    Source: Intel


    See also at TinkerTry

    Everything-laid-out-for-SuperServer-experiments
    As I was experimenting with various drive configurations, a couple of months ago.

    See also