Practical ways to deal with VMware ESXi 5.0's 2TB virtual disk size limitation

Posted by Paul Braren on Dec 16 2011 in
  • HomeServer
  • Storage
  • Virtualization

Recently ruderthanyou posted some intriguing comments on The reasoning behind vZilla's storage configuration yesterday:

    I have an ESXi 5 whitebox build very similar to yours. I've recently run into this 2TB limit for VMDKs, and this post interested me greatly. I have 5x3TB Hitachi Coolspins in an 11TB array, and I wanted to present an 8TB VMDK to my SBS 2011 Essentials guest. It took me a few times reading this post to understand, but it doesn't appear that you're presenting any of the 7TB VMFS as a >2TB VMDK to your guest OS. You're bypassing the limit by using the Mediasonic on the passthru USB 3.0. I wanted to verify that I was understanding you correctly.

    As you mention, the only way to get a >2TB LUN presented to a guest OS is to use something like RDM, iSCSI, or FC. I've been reading up on RDM, and before I go down a potential rat hole, I wanted to see whether you were able, using your provided link, to configure a local >2TB RDM volume and present it to your guest OS. Seems like a lot of people have problems here, and it also doesn't appear to be a supported config by VMware.

    And before I forget, I want to mention your website has been great. You have saved me countless hours of experimentation and most likely my marriage with my little endeavor. lol.

I'm glad I potentially helped your marriage; who'd have thought storage talk could possibly do such a thing ;-).

I'll also say that marriage advice isn't exactly my bag, so let me instead stick with trying to explain the 2TB virtual disk size limitation, and how I dealt with it in my lab. In all honesty, I'm no expert on this either; I've really only recently tested two possible solutions, and I'm brave (or foolish) enough to blog about it. Technically, the limit is actually 2TB minus 512 bytes, as seen in VMware's Configuration Maximums. And this isn't a discussion about booting Windows from a C: drive bigger than 2TB; that's a whole different matter. I'm talking about data drives here, not boot drives. See also Block size limitations of a VMFS datastore.
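For context, the block size matters mostly on older VMFS-3 datastores, where it caps the largest file you can create: a 1MB block size allows a 256GB file, 2MB allows 512GB, 4MB allows 1TB, and 8MB allows 2TB minus 512 bytes. A VMFS-5 datastore uses a unified 1MB block size, but a single VMDK still tops out at 2TB minus 512 bytes on ESXi 5.0. If you're not sure what your datastore is using, a quick check from the ESXi shell looks something like this (the datastore name is just an example, substitute your own):

    # Show filesystem details for one datastore, including the file block size.
    # "datastore1" is a placeholder; use your own datastore name.
    vmkfstools -Ph /vmfs/volumes/datastore1

    # List every mounted filesystem and whether it is VMFS-3 or VMFS-5.
    esxcli storage filesystem list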

Hopefully a picture really is worth a thousand words. Please leave your comments below, to let us all know if you agree!

[Diagram: going beyond the 2TB VM maximum disk size]

Soon, I'd like to do a deeper-dive article on RDM mappings, similar to the recent post about configuring USB 3.0 passthrough at TinkerTry.com/usb3passthru. I'll try to cover the more advanced and totally unsupported technique of actually configuring RDMs (Raw Device Mappings), which magically allow your existing >2TB SATA drives, and all the data on them, to instantly appear to ESXi 5.0 as a SAN volume, ready to be RDM-mapped to a particular VM, NTFS partitions intact. The screenshot below gives you a preview, showing a rehearsal of what I typed (well, pasted) into PuTTY to test an RDM successfully.
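Until that article is ready, here's a rough sketch of the sort of commands involved, run from the ESXi shell over SSH (which is what PuTTY is doing in the screenshot below). The device identifier, datastore name, and file names here are placeholders rather than my exact ones, and keep in mind this is unsupported for local SATA disks:

    # List the local disk device identifiers (the long t10.ATA... names).
    ls -l /vmfs/devices/disks/

    # Create a physical-compatibility (passthrough) RDM pointer file on an
    # existing VMFS datastore. The device name and datastore path below are
    # placeholders; substitute your own.
    mkdir -p /vmfs/volumes/datastore1/rdms
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____ExampleDrive \
        /vmfs/volumes/datastore1/rdms/bigdisk-rdm.vmdk

    # Use -r instead of -z for a virtual-compatibility RDM.

The resulting .vmdk is just a small mapping file; you then add it to the VM as an existing disk, and the guest sees the whole raw disk, partitions and data intact.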

Can vCenter's Storage Migration actually work with such RDMs? I'd say the outlook isn't great, but that's OK; this VMware KB article is still an interesting read, and I'll probably give it a shot someday anyway, once everything is migrated off my older server to my new server, and backed up, of course.

[Screenshot: PuTTY session with the RDM commands, Dec 15 2011]

Dec 17 2011 Update: I got one RDM working fine, as seen in the PuTTY commands above and the evidence in the "rehearsal" screenshot below; more info to follow in future posts.

[Screenshot: the RDM appearing as a Mapped Raw LUN]
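If you want to sanity-check a mapping file from the ESXi shell, vmkfstools can query it; again, the path below is just an example:

    # Query an RDM mapping file. A physical-compatibility mapping reports
    # itself as a Passthrough Raw Device Mapping and shows which vml device
    # it points at. The path is a placeholder.
    vmkfstools -q /vmfs/volumes/datastore1/rdms/bigdisk-rdm.vmdk

In the vSphere Client, that same disk shows up in the VM's settings as a "Mapped Raw LUN," which is what the screenshot above is capturing.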

Jan 07 2012 Update:
It works! VMware Converter Standalone was able to virtualize the RDM-based Windows Home Server; the only significant catch was that the VM had to be turned off.