32GB of RAM is the sweet spot for VMware ESXi 5.0 Free Hypervisor

Posted by Paul Braren on Nov 16 2011 (updated on Feb 28 2012) in
  • BIOS-UEFI
  • CPU
  • Efficiency
  • Motherboard
  • Reviews
  • Storage
  • Virtualization

    Feb 28 2012 Update:
    Great news: with some patience and careful shopping, ~$200 USD can now get you 32GB of RAM that works great. Given that 32GB is the maximum the free Hypervisor 5.0 supports anyway, that's a good deal!
    TinkerTry.com/32gb-memory-on-asrock-fatal1ty-z68-professional-gen3-motherboard

    I cannot claim TinkerTry.com/vzilla, where I lay out all the parts and the total cost, has been a particularly affordable proposition for folks: the total lands somewhere between $2000 and $3000 without drives. It really depends upon the speed and size of the shared storage you desire; RAID with proper 3rd-generation SSD support is nearly half of that cost.

    But when you begin to look at the cost of putting together a server and a separate, slower (or far costlier) NAS attached via Gigabit, this all-in-one approach starts to look more attractive for an always-on system like this. My throughput to my local storage subsystem will likely far exceed any Synology/Netgear/Drobo type of solution, at the cost of some additional complexity and setup time. The speed will really shine once CacheCade 2.0 read/write caching arrives in 1Q2012 for my fast LSI 9265 RAID5; see TinkerTry.com/lsi9265smackdown. And the array's data is effectively "auto-bricked" if removal is unauthorized, given the LSI's ability to encrypt all drives without any impact on speed. Good things indeed.

    But one of the things that always really bothered me about this build project was memory. Virtualization loves memory. And the Z68 chipset only supports 4 DIMM slots, which means about $140 gets you 16GB total. Contrast that with server-class Supermicro motherboards, which generally have 8 or more DIMM slots and are generally more RAID friendly. But they also tend to cost quite a bit more than Z68 motherboards, and they require the more expensive Xeon line of CPUs as well, so the overall cost climbs quickly.

    While features like vSphere Host Cache Configuration might alleviate the pain of memory pressure somewhat, I'd rather just have 32GB.

    Read onward for the good news...

    This summer, shortly after releasing ESXi 5.0, VMware also altered their licensing for the free ESXi 5.0 Hypervisor, allowing for 32GB of RAM. Phew, this really helps the "home lab" whitebox user. Breaking past the 2TB VMFS partition limitation was another big step forward.  And finally, ESXi 5.0 can apparently run Microsoft Hyper-V nested as well.

    So, here are the significant hurdles I have to clear to get vZilla running, and the status of each:

    a) VMDirectPath for LSI 9265-8i RAID

    VMDirectPath, also known as passthru, is related to the RAID controller's BIOS-based versus GUI-based configuration techniques; let me explain. RAID adapters from LSI and Adaptec take up a lot of BIOS memory, and on Z68 desktop motherboards with 8GB of RAM that prevents the utilities from starting (such as LSI's Ctrl+H WebBIOS). No major RAID cards seem to be shipping UEFI models yet, never mind models supporting the new PCIe 3.0. So we're stuck with trying to free up BIOS memory, which sometimes fails to work, as seen in my video.

    So instead, I'll just continue to use the method explained here to get the LSI MegaRAID health monitored, and I'll use the GUI from inside a virtual machine that lives on a drive attached to one of the motherboard's SATA ports, for RAID controller storage configuration or for recovery should something go wrong, explained in more detail below. Yes, setting up the passthru and rebooting ESXi to enable it temporarily just for RAID reconfiguration is a little less elegant, but it's doable, and I've already tested this.

    Finally, today I learned from excellent LSI support that CIM providers installed in the ESXi environment will tell you RAID, drive, and cache battery health, for example (seen here at TinkerTry.com/lsi92658iesxi5), but they won't let you configure the RAID or run MegaRAID unless you enable VMDirectPath. Not a showstopper though, as long as I have a tested process for a "bad day" with drive failures, or for adding drives later (online RAID expansion, no data loss). I'll need to diagram this vZilla build when it's done, to easily paint a picture of what's running where.

    Here's a possible workaround, allowing you to use MegaRAID in a pinch in an ESXi 5 environment:

    Create RAID array:

    1. install Windows 7 x64 in a virtual machine with 1GB of RAM that lives on a single 1TB drive attached to one of the motherboard's SATA ports
    2. enable VMDirectPath for the 9265-8i in ESXi 5.0, reboot (a quick way to double-check the passthrough state follows this list)
    3. assign the 9265-8i to that RAID VM
    4. install 9265 drivers and MegaRAID in that VM, then configure the RAID array
    5. unassign the 9265-8i from that RAID VM
    6. disable the VMDirectPath in ESXi 5.0, reboot
    7. format the new RAID array in the ESXi 5.0 GUI as VMFS-5, and leave it that way
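
    By the way, if you'd rather script the sanity check than squint at the vSphere Client, here's a rough sketch of what listing the host's PCI passthrough state could look like with the Python pyVmomi library. The hostname and credentials are placeholders, this isn't pulled from my build notes, and newer pyVmomi releases may also want an explicit SSL context for ESXi's self-signed certificate.

        # Rough sketch: list PCI passthrough status on a standalone ESXi host.
        # Hostname and credentials below are placeholders, not from my build.
        from pyVim.connect import SmartConnect, Disconnect
        from pyVmomi import vim

        si = SmartConnect(host='esxi-host', user='root', pwd='your-password')
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        host = view.view[0]

        # Map PCI device IDs to human-readable names.
        names = {dev.id: dev.deviceName for dev in host.hardware.pciDevice}

        for info in host.config.pciPassthruInfo:
            if info.passthruCapable:
                print(names.get(info.id, info.id),
                      'enabled' if info.passthruEnabled else 'disabled',
                      'active' if info.passthruActive else 'inactive')

        Disconnect(si)

    After the reboot in step 2 the 9265-8i should show up as enabled, and after step 6 it should be back to disabled.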

    Fix or change RAID array:

    If I have a RAID problem (a failed drive, etc.) where I feel I really need to see MegaRAID, or when I wish to enable CacheCade 2.0 and FastPath in 1Q2012:

    1. enable VMDirectPath for the 9265-8i in ESXi 5.0, reboot
    2. assign the 9265-8i to that RAID VM
    3. boot the RAID VM, taking care not to reformat the VMFS-5 volume as NTFS or accidentally write a disk signature to it (I don’t believe that’ll be a problem, but I need to test to be absolutely sure)
    4. do my MegaRAID operations (replace drive, add license keys, etc)
    5. unassign the 9265-8i from that RAID VM
    6. disable the VMDirectPath in ESXi 5.0, reboot, resuming normal operations

    You won't really be able to watch the progress of rebuild operations this way, but I will see through the CIM health data when the rebuild is complete and all drives are healthy again.
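
    If you'd like to script that health check rather than clicking through a browser, here's a very rough idea of what polling ESXi's CIM service could look like with the Python pywbem library. The class name is a generic placeholder and the connection details are assumptions; the actual LSI provider classes (and possibly namespaces) will differ, and depending on your pywbem version you may need to relax certificate checking, so treat this strictly as a sketch.

        # Sketch: poll ESXi's CIM service (port 5989) for storage health.
        # Assumes pywbem is installed and the LSI CIM providers are present.
        # 'CIM_StorageVolume' is a generic placeholder class name; the real
        # LSI provider classes will differ.
        import pywbem

        conn = pywbem.WBEMConnection(
            'https://esxi-host:5989',           # placeholder hostname
            ('root', 'your-password'),          # ESXi credentials
            default_namespace='root/cimv2')

        for vol in conn.EnumerateInstances('CIM_StorageVolume'):
            # OperationalStatus is a standard CIM property; 2 means "OK".
            print(vol['DeviceID'], vol['OperationalStatus'])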

    Another option, if you're willing to learn it, may be to use the LSI MegaCLI, discussed here.
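
    For instance, once you're comfortable with MegaCLI syntax, a tiny script run inside the RAID VM (while the controller is passed through to it) could summarize logical drive state without launching the full MegaRAID GUI. Again, just a sketch: the install path to the MegaCli binary below is an assumption, so point it at wherever yours lives.

        # Sketch: summarize logical drive state with LSI's MegaCLI.
        # The binary path below is an assumption; adjust for your install.
        import subprocess

        MEGACLI = r'C:\Program Files\MegaRAID\MegaCli64.exe'  # assumed path

        output = subprocess.check_output(
            [MEGACLI, '-LDInfo', '-Lall', '-aALL'], text=True)

        for line in output.splitlines():
            line = line.strip()
            # Keep just the interesting summary lines, e.g. "State : Optimal".
            if line.startswith(('Name', 'Size', 'State')):
                print(line)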

    b) VMDirectPath for USB 3.0 support

    Note that both the CPU and motherboard must support VT-d, which I found only on certain ASRock and MSI Z68 boards.

    At TinkerTry.com/vmdirectpath you'll see that I learned that trying to get motherboard-based SATA, video, or USB 3.0 passed through to a particular VM was a losing proposition, so instead I'm focusing on adding a USB 3.0 adapter for all my Windows Home Server USB 2.0 and USB 3.0 requirements, which include CyberPower UPS monitoring, energy monitoring, and rapid data copies to a USB 3.0 RAID enclosure.  I will be video blogging about my ongoing VMDirectPath tests very soon, right here at TinkerTry.

    c) 16GB versus 32GB of RAM

    My project was mostly held back by having only 16GB of RAM, letting me comfortably juggle only 4-5 operating systems, but not also test Hyper-V.  So I got to thinking about this week's Sandy Bridge-E (X79) releases, which allow motherboards with 8 DIMM slots, seemingly making 32GB of RAM a more affordable proposition, at first glance anyway, such as this MSI X79A-GD65 motherboard for roughly $330.

    Here are some of the gotchas I quickly discovered:

    • the CPUs are roughly $1000 for the Core i7-3960X or roughly $600 for the Core i7-3930K (with nothing like the Core i7-2600 versus 2600K choice I had to make; they both do VT-d)
    • the new CPUs burn 130 watts instead of the Core i7-2600/2600K series' 95 watts (adding up to hundreds of dollars more in electricity over my intended 4-year usage; see the rough math after this list)
    • the X79-optimized 32GB memory kit from Kingston will probably cost around $1000 (not yet announced here, but see the $1000 estimate for CORSAIR's similar kit)
    • the considerable time and money I've already sunk into getting this build to a known-good state (after all, tinkertry.com is also known as knowngoodsolutions.com).
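
    For the curious, here's the back-of-the-envelope math behind that electricity bullet: roughly a 35 watt TDP difference, running 24x7 for 4 years, at an assumed $0.18 per kWh (plug in your own utility rate, and keep in mind real-world idle draw will differ from TDP).

        # Back-of-the-envelope: extra electricity cost of a 130 W CPU versus a
        # 95 W CPU, always on for 4 years. The $0.18/kWh rate is an assumption.
        extra_watts = 130 - 95                 # TDP difference
        hours = 24 * 365 * 4                   # always-on, 4-year lifespan
        rate_per_kwh = 0.18                    # assumed electricity rate, USD

        extra_kwh = extra_watts * hours / 1000
        print('%.0f kWh extra, roughly $%.0f' % (extra_kwh, extra_kwh * rate_per_kwh))
        # Prints: 1226 kWh extra, roughly $221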

    So instead, I'm relieved and happy to keep my more efficient and certainly more affordable vZilla rig exactly as-is, with one exception.  I plan to just replace the 16GB of RAM I have in there now (re-using it on other systems) and go with 32GB for roughly $500; see ASRock's stated memory test matrix here, which lists only the ADATA XPG Gaming Series 16GB (2 x 8GB) kit.  Two of those 16GB kits come to about $500 for 32GB total.

    And if I wait a little longer, perhaps that price will go down, unless floods or fires hit the DIMM factories, of course.  Ideally, I'd like a little higher memory speed as well, since my motherboard's watt burn doesn't seem to go up if I manually clock the memory to 1600.  So perhaps I'll wait until somebody confirms that something like this CORSAIR Vengeance 32GB (4 x 8GB) kit actually works with my ASRock Fatal1ty Z68 Professional Gen3 motherboard; I doubt I'd notice any difference in speed though, so I'm mostly shopping based on pricing/rebates/specials at this point.  Then there's the G.Skill RipjawsX F3-12800CL10Q-32GBXL, whose site lists my exact motherboard, priced around $500 at newegg.com.  However, there are no reviews yet, and no other sites seem to have any reviews either, so it feels very early to seek out 32GB memory kits (or individual or paired kits of 8GB DDR3 DIMMs).

    Questions/Thoughts?  Please put your comments below!

    (To make it easy, login isn't even required, although signing up for Disqus is handy, so you can easily post at the many other sites that also use Disqus.)