The reasoning behind vZilla's storage configuration

Posted by Paul Braren on Dec 6 2011 (updated on Apr 21 2014) in
  • Storage
    [Image: MediaSonic external RAID showing WHSv1 virtualization underway, front door removed]

    Your comments below the article are always appreciated, not just by me, but by all TinkerTry visitors!

    So, here is my story, in timeline format, where my reasoning will hopefully become clear.

    June 2007: Creation of video editing system, 3TB of valuable video files on Intel ICH6 RAID0 drive D:, that need to be backed up reliably, somewhere, somehow!

    December 2007: Creation of Windows Home Server v1 (to replace manual Ghost backups)

    I found I now had the need to make reliable and compact backups of my huge video editing machine. Yep, over 3TB of video to deal with and back up. WHS (Windows Home Server) fit the bill nicely, allowing single instance storage and greatly decreasing the backup server storage required to handle the backups of roughly 8 systems every day, many of which contain some of the same large files. WHS proved to be much better than traditional imaging or backup software.

    WHSv1 also handled my 4TB PC Backups folder just fine, with Drive Extender breaking past the 2TB barrier without worries or issues.

    December 2010: WHSv1 begins to miss some backups, or can no longer reliably complete all backups between 1am and 6am. But bare-metal restores continue to save me from PC disasters roughly every other month.

    April 2011: Migration to WHS2011 plans and build begin

    I began to research and test new Z68 motherboards with my new Core i7 2600 CPU, and it seemed to work well for virtualization, at a much lower price point than power-hungry server-class systems. This was partly done as a learning exercise, seeing which motherboard vendor's form, function, and UEFI BIOS I liked the best, and partly just because nobody else seemed to also be talking about it, so I was curious if it could work. Of particular interest to me was getting USB 3.0 passthrough (or added eSATA ports) working as well.

    But my pressing need was not to satisfy my curiosity; it was to consolidate several systems into just one efficient system left running 24x7, allowing more reliable daily backups than my very old Pentium D WHSv1 system could handle.

    August 2011: VMware ESXi 5.0 released, 2TB VMFS filesystem limit lifted.

    November 2011: The light bulbs go off, the Eureka moment!

    Figuring out the best way for me to move to the new WHS2011 was not simple. I needed to figure out how best to handle storage for my needs, budget, and stringent speed requirements, as SSDs had begun to spoil me forever, in a good way. WHS2011 was finally available and fairly mature, but handling anything beyond 2TB was a bit trickier. Not because I need to back up systems with >2TB boot drives, I don't, and WHS2011 still handles backing up GPT volumes >2TB just fine. The issue is that VMware ESXi 5.0, which will host my WHS2011 system, still limits individual virtual disks to 2TB, so anything beyond that size is trickier. But at least the 7TB RAID array I planned to build could now be one big VMFS-formatted lump.

    Finally, I realized I didn't need to resort to iSCSI or RDM trickery to get beyond 2TB, which could increase complexity, cost, and configuration time. Having played with OpenFiler and VSA, that approach just was not appealing to me, and speed and latency were disappointing, although I did learn some good stuff (and updated my LeftHand Networks iSCSI SAN skills).

    What if I merely use USB 3.0 configured for passthrough to the WHS2011 VM, and see if the speed suffices for my PC Backups needs? In other words, just the backup destination is moved from a 2TB-maximum virtual disk to an affordable and reasonably fast external 4-drive enclosure (with embedded RAID5 capability)? Yes, was the answer. I had already become quite familiar with the MediaSonic that did exactly that, allowing me 5.5TB of external storage at RAID5 levels, with drives I already owned, and a second enclosure that can take the 4 drives and import the RAID settings, avoiding a single point of failure for me years down the line. And because I have more than 10 PCs to back up regularly on my familycloud, and I now have enough storage, I can use the beefier alternative that handles 25 PC backups, called Windows Storage Server 2008 R2 Essentials.

    December 2011: Test matrix finally fully baked. The 3 roughly equal RAID5 lumps of storage I came up with allow me to escape my power-thirsty 1TB drives for good. Awesome!

    Check this out, it's all coming together: all the drives I have, many acquired by gutting my old WHSv1 box, give me this layout. Nice! It's almost like I planned it:

    • Internal 5.6TB RAID5 -  5x1.5TB drives, LSI 9265-8i RAID controller
      Superfast storage for VMs and data, with SSD cache soon
    • External 5.5TB RAID5 -  4x2TB drives, external 1 of 2 MediaSonic enclosures, on USB 3.0
      (for encrypted backups)
    • External 5.5TB RAID5 -  4x2TB drives, external 2 of 2 MediaSonic enclosures, on USB 3.0
      (for encrypted backups kept offsite)
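    The usable capacities above can be sanity-checked with a quick sketch. This assumes the usual RAID5 one-drive parity overhead, and that the enclosures report capacity in binary TiB while drive vendors sell decimal TB; the function name is just for illustration:

    ```python
    def raid5_usable_tib(drive_count, drive_size_tb):
        """RAID5 sacrifices one drive's worth of space to parity,
        so usable space is (n - 1) drives; convert vendor TB
        (10^12 bytes) to binary TiB (2^40 bytes)."""
        usable_tb = (drive_count - 1) * drive_size_tb
        return usable_tb * 1e12 / 2**40

    # Internal array: 5 x 1.5TB drives
    print(round(raid5_usable_tib(5, 1.5), 2))  # → 5.46
    # External arrays: 4 x 2TB drives each
    print(round(raid5_usable_tib(4, 2.0), 2))  # → 5.46
    ```

    Both layouts land at roughly 5.5TB usable, which is why the three arrays come out as near-equal lumps.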

    The process I'm using to virtualize my old WHSv1 into my new vZilla build is underway and working well. It looks like a cold clone of 5TB of data will take about 30 hours, at a rate averaging 46MB/s, which is roughly 162GB per hour: not bad for a one-time migration that is largely hands-off once kicked off, and it will result in a VM that behaves exactly as the original physical system did, only faster. This gives me time to move the numerous USB devices and services over to the new server virtual machines, and it follows the always-a-good-idea practice of having all data in at least 2 places at all times.
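    The 30-hour estimate follows directly from the sustained throughput; a quick back-of-the-envelope check (assuming a steady ~46MB/s for the whole clone, which real transfers rarely hold exactly):

    ```python
    def migration_hours(data_tb, rate_mb_per_s):
        """Estimate cold-clone duration from sustained throughput,
        using decimal units throughout (1 TB = 10^6 MB)."""
        total_mb = data_tb * 1e6
        return total_mb / rate_mb_per_s / 3600

    # ~5TB at a sustained ~46 MB/s
    print(round(migration_hours(5, 46), 1))  # → 30.2
    ```

    That same 46MB/s works out to about 165GB per hour, in line with the roughly 162GB per hour observed.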

    Many more details to come, including the drive types I used, RAID settings, and throughput tests. Stay tuned!

    12-9-2011 Update: whether VMware Converter will work with WHSv1 is up for debate (I'm having some issues with the newer version), but I may get the boot-from-CD method to work

    12-9-2011 Update: phrasing of this article adjusted for easier reading, and RAID decision spreadsheet screenshot and better in-progress screenshot inserted below:

    12-12-2011 Update: VMware Converter 5.0 issue resolved, but speed is slow (despite turning off encryption; will blog about that later), taking 3 days to move 5TB, but it should work out fine. Screenshot added below.

    VMware Converter 3 Cold Clone boot CD in progress (but it ultimately failed to produce a bootable VM)
    VMware Converter 5.0 doing a live virtualization of WHSv1