This article is something special: it's contributed entirely by a site visitor. Allan happens to have started out with the Commodore VIC-20, as did I! That's right, I couldn't afford the Commodore 64, but this article isn't about me.
Allan's enthusiasm for sharing is incredible, and his story is a good read. I'm hoping we can all chime in and let Allan know what we think. He poses some interesting perspectives and questions, and I'm curious whether you feel this system might also be a contender as a known-good ESXi 5.0 virtualization solution, as vZilla has turned out to be for my particular needs and for the others who have cloned it. Drezilla's system is about 9 months newer: an X79-chipset Sandy Bridge-E build.
Keep in mind you don't even have to register to type your comments below, so have at it!
Here we go, uncut, the URLs added to his parts list are the only tweaks that I made...
So here is my homebrew ESXi 5 story.
Starting on a VIC-20 with CompuServe in 1982, I bring 34 years of tech experience to the table. After retiring from the IT industry in 2005, I moved to Asia, where I own and run a distribution company and manage our IT as a hobby.
Five years ago, I first set up 3 virtual machines on a Hackintosh within VMware Fusion. The VMs were running 24/7, and after a year they started to have issues. Studying some more, I came to realize that I should not have production servers running 24/7 in Fusion on a Hackintosh! Duh...
With a very small budget, I patched together two P45-based motherboards with Xeon CPUs. I immediately put ESXi onto those two rigs, but there were serious issues with the SATA controllers: the whole system kept freezing up. With no budget to buy a RAID card and no time to tinker (the Monday work day was approaching), I opted to install Windows 2008 and VMware Server 2. After converting my files, this solution worked really well for nearly 3 years.
Breaking all the rules of messing with something that works, I decided to build a new server.
I was thinking about a few options for a new host:

1. Mac Mini maxed out with 16GB of RAM and a Thunderbolt Promise Pegasus
2. Dell T710 bare bones, upgrading RAM and HDs down the road as I could afford them
3. Same as 2, but with HP
4. Home-made monster!
I should add at this point that I use these VMs in a production environment. There are 5 VMs serving 12 people.
- BES (soon to be retired; who uses a BlackBerry anymore!)
Running the numbers, I came to realize that options 1, 2, and 3 would each cost me around $3,000 to walk in the door, whereas option 4 would only cost me about $1,000 to start... key word, to start.
Key Points of each solution:
- Mac Mini: Low power consumption and a small footprint, but limited CPU and memory for VMs, and limited performance with only 1 NIC. The limited memory and single NIC sorta killed this deal. Apple should really come out with a serious Apple Server version of the Mac Mini: imagine 3 NICs, dual CPUs, 64GB of RAM, and room for 4 drives. That would have been a no-brainer with its small footprint and eco-friendly power consumption.
I would like to add that I was very excited about the idea of running my VMs on a Promise Pegasus. I could set up the RAID flavor I like and then have my VMs running through Thunderbolt! What a great solution. But you should NOT buy the Promise Pegasus. Why? They ship it with desktop drives (Hitachi Deskstar 7K3000 2TB, HDS723020BLA642; see this article) that are not designed for enterprise use, i.e., RAID. So basically Promise is taking advantage of the situation (that being they have the most viable Thunderbolt solution on the market) and forcing you to buy drives for their RAID enclosure that you can't even use for RAID purposes! Desktop drives in RAID! How about we just ask for an early death? On the other hand, if they sold the Pegasus without drives, I would be all over it for my Mac desktop. So please be careful, and make sure you tell your friends and cousins: don't buy from Promise till they promise to deliver a proper solution!
- Dell/HP: The Dell would have been amazing, and so would the HP. Both have hardware that is supported out of the box across the board, and they would have lasted me 10 years. But they hog power, they're loud, and maxing them out with memory and controllers would have run up a serious bill. I was pretty close to going this route... tough choice.
- Home-made beast: Questionable future support for RAID cards, and I wasn't 100% sure ESXi would run smoothly, but otherwise it looked like the winner on paper, with affordable cost, fun, and flexibility. I figured if ESXi didn't comply, I could always set up VMware Server 2 or XenDesktop. I knew I had options, and it was affordable to get in. The risk was worth it... so winner winner, chicken dinner!
So one shopping trip up at Pantip Plaza here in Bangkok later and I came home with the following goodies:
Shopping list... sorry, not a Newegg shopper.
Total out here in Thailand was $1,200 USD.
Building out the rig:
The finished rig:
Basic findings with ESXi 5 U1 (not including advanced features like passthrough, because I have not yet learned how to utilize it or even check for it):
CPU - 100% OK
Memory - 100% OK
Expansion slots - 100% OK
Storage - The 6 ports on the X79 are OK (2 at 6Gb/s and 4 at 3Gb/s), but the Marvell 9128 controller is not working, and I could not find an ESXi driver for that chip. On the other hand, the ASM1061 eSATA ports were working, so I re-routed those ports back into the case to add 2 more SATA ports (6Gb/s).
LAN - The on-board Intel 82579V is not working, and the common workaround for it did not work with U1, so I installed 3 Intel 82572EI-based NICs.
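If you want to check how your own board fares before committing, the ESXi 5 shell can show exactly which controllers and NICs the hypervisor actually claimed. A sketch of the commands I'd reach for (the offline-bundle path in the last one is a hypothetical placeholder, not a real filename):

```
# Storage adapters ESXi claimed; an unsupported controller
# (like the Marvell 9128 here) simply won't appear in this list:
~ # esxcli storage core adapter list

# Detected NICs and the driver bound to each:
~ # esxcli network nic list

# Installing a community/vendor driver packaged as an offline bundle
# (path below is a hypothetical example):
~ # esxcli software vib install -d /vmfs/volumes/datastore1/driver-offline-bundle.zip
```

Note that devices with no ESXi driver at all usually just vanish from these lists, which is exactly the silent behavior described above for the Marvell chip.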
I can’t express how happy I am with how well this setup is working.
It’s an awesome host, but the storage situation is NOT so awesome. Running each VM on its own SATA drive, with that VM backing itself up each night to a second HD, is not optimal... so it’s time to upgrade that area and finally do this the right way.
Legacy drives for storage. The only thing I opted not to invest in at this point was hard drives. I managed to move all the VMs from the two old hosts to the new host, bringing over the old HDs one at a time. This is where I am in my decision-making process: over the next 3 to 6 months, I would like to upgrade the drives to SSDs, with or without a RAID card. I'm not sure the RAID card is really necessary with SSDs, and this is where I would like to open up a discussion and find the best possible solution.

Reading further into your own opinions, I can see that you have a RAID card with mechanical drives and are going for the SSD caching feature to get SSD RAID performance. This might be an option for me as well... it could lower my investment cost and give me more storage space. My biggest beef with mechanical drives is that RE4s are SATA II. Did anyone let Western Digital know that they need to, ummm, upgrade their enterprise drives? The only part I am set on at the moment is the Intel 520 SSD; it seems to be the best and most stable SSD on the market right now.
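To put some numbers behind that SATA II gripe: SATA uses 8b/10b encoding, so the link carries 10 line bits per data byte, and a quick back-of-envelope calculation shows why a 3Gb/s port chokes an SSD but not a mechanical drive. The drive throughput figures below are rough vendor specs I'm assuming, not measurements:

```shell
# Usable SATA bandwidth after 8b/10b encoding: 10 line bits per data byte.
sata2=$((3 * 1000 / 10))   # SATA II,  3 Gb/s -> ~300 MB/s usable
sata3=$((6 * 1000 / 10))   # SATA III, 6 Gb/s -> ~600 MB/s usable
ssd=500                    # Intel 520 sequential read, rough vendor figure (MB/s)
re4=130                    # WD RE4 sustained transfer, rough vendor figure (MB/s)

echo "SATA II ceiling ~${sata2} MB/s: bottlenecks the SSD (~${ssd}), not the RE4 (~${re4})"
echo "SATA III ceiling ~${sata3} MB/s: headroom for the SSD"
```

In other words, an RE4 never gets near its SATA II link, but putting a fast SSD behind anything less than 6Gb/s throws away a big chunk of what you paid for.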
I wouldn’t even know where to begin with passthrough and don’t really see the benefit at the moment. Maybe with some NIC cards down the road, or a video card to get better performance out of a desktop VM.