The reasoning behind vZilla's use of the LSI 9265-8i RAID Controller (hint: SSD for RAID5 read-and-write caching)
You may recall my article The reasoning behind vZilla’s storage configuration, which dove into why I settled on my particular storage configuration, given the number of drives I had on hand. Watt burn considerations played a large part in that decision, since this system is left running 24x7. I'm also spoiled forever by SSD speeds, and I've seen what RST SSD caching can do, so I also wanted my RAID5 array to behave more like an SSD. But full ESXi 5.0 support was a hard requirement, which admittedly narrowed the field of options.
You may also recall my decision to move away from the ESX whitebox favorite, the affordable Dell PERC 5, which is based on an older LSI controller. Going with a several-year-old solution on a brand-new system, and covering up PCIe pins with electrical tape just to get any Z68 motherboard I tried to boot, really bothered me.
It was time to build for the long haul, with an eye toward a much more modern RAID controller, with:
- Dynamic volume expansion (expand RAID volume size while keeping all the data)
- SATA3 support (good for the latest wave of SSDs that need it, as demonstrated here)
- Overall speed, stability, and support (LSI has a long history of VMware support)
- CacheCade 2.0 read/write caching of RAID5 (unique in the industry), a promised 1Q2012 feature
- Expandability: if I ever wind up with more than 8 drives, I can upgrade for $278 USD with the compatible 24-port Intel RAID Expander Card RES2SV240
So I briefly tested an Adaptec 6805Q with maxCache read caching, long before this 6805 versus 9265 article came out. I also tested the LSI 9260-8i and the 9265-8i, and settled on the LSI 9265-8i RAID adapter. The testing was very painful and time-consuming, fraught with missteps and learning. One example of a learning experience was my use of ATTO Disk Benchmark at its defaults: I forgot to crank up the dataset to 2GB to avoid falsely fast, cache-only results (the 9265 has 1GB of onboard cache).
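If you're wondering what I mean by cache-only results, here's a rough back-of-the-envelope sketch, in Python, of how a test dataset that fits inside the controller's 1GB of cache makes the array look far faster than it really is. The throughput numbers are made-up illustrations, not measurements from vZilla or the 9265-8i:

```python
# Toy model: apparent read speed blends cache hits and spinning-disk reads,
# weighted by how much of the test dataset fits in the controller's cache.
# All speeds below are assumed numbers, for illustration only.
CACHE_SIZE_GB = 1.0        # onboard DRAM cache on the 9265-8i
CACHE_SPEED_MBPS = 2500.0  # assumed: reads served straight from controller cache
ARRAY_SPEED_MBPS = 350.0   # assumed: sustained reads from the RAID5 spindles

def apparent_read_mbps(dataset_gb: float) -> float:
    """Simplistic blend of cache and disk speeds for a given test size."""
    hit_fraction = min(CACHE_SIZE_GB / dataset_gb, 1.0)
    return hit_fraction * CACHE_SPEED_MBPS + (1 - hit_fraction) * ARRAY_SPEED_MBPS

for size_gb in (0.25, 2, 32):  # ATTO-default-ish, 2GB, and a much larger run
    print(f"{size_gb:>6} GB dataset -> ~{apparent_read_mbps(size_gb):.0f} MB/s apparent")
```

The bigger the dataset gets relative to that 1GB of cache, the closer the apparent number gets to what the spinning disks can really sustain, which is the number that actually matters.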
I also found that publishing any kind of benchmark tends to bring on endless, heated flame wars. Given that I don't have a lot of enterprise-level 7200rpm drives or SSDs on hand, nor the time to carefully document every single combination of cache settings, benchmark testing became a much less appealing proposition. So I've purposely chosen to stay quiet on the final numbers I'm getting, at least until I get CacheCade 2.0.
I had a bit of a worry when I went to install the LSI 9265-8i on the initial August 2011 ESXi 5.0 release and found driver support wasn't built in, despite the card being listed on the VMware HCL. I was quite relieved when I figured it out and wrote up an article on the somewhat complex process. That article quickly became one of the top 5 most-read stories ever on this blog, since this info was non-existent elsewhere:
How to make ESXi 5.0 recognize an LSI 9265-8i RAID controller
Then, with the release of ESXi 5.0 Update 1, LSI 9265-8i driver support was baked right in and installation became easy, as shown in the video I created and shared this week at Build your own vSphere 5 datacenter, using a Z68 motherboard.
To be fair, it'd be nice if LSI actually supported MSM (MegaRAID Storage Manager) in a VM someday soon, especially since MSM used to sort-of-work in ESX/ESXi 4.1. But that's not a showstopper: I'm able to get into WebBIOS to configure the RAID initially and add SSDs as needed, and that suffices. In a pinch, I can always dual-boot into native Windows and run MSM from there.
This saga all leads to today, when I read this wonderful, challenging comment/question that came in overnight from Jay Oliphant:
https://tinkertry.com/vzillacompleted/#comment-419357216
First of all, I have been enjoying following your blog about vZilla, Regarding the LSI RAID controller and SSD caching - I'm assuming this is all handled by the RAID controller itself. Does it do this all at the block level, and move more frequently accessed blocked to the SSD? If the SSD fails, do you lose the whole raid? Or merely just the extra "cached" performance? As far as ESXi is concerned, I assume it can only "see" a single volume presented to it and is unaware of the SSD caching happening in the background. I use ESXi 5 @ home as well, and am looking into potentially speeding up the performance of my home server, as it is currently only a Dell PERC controller with a regular RAID 5 array.
There's a lot in that excellent commentary; let me try to tackle it, one or two sentences at a time:
Regarding the LSI RAID controller and SSD caching - I'm assuming this is all handled by the RAID controller itself.
Yes. ESXi 5.0 doesn't need anything beyond the drivers built into the hypervisor to see the volume on the array; it doesn't know or care whether it's cached.
FYI, I did buy the Battery Backup Option, as listed on TinkerTry.com/vzilla and seen wire-tied externally (where it's kept cool) at TinkerTry.com/vzillacompleted.
Does it do this all at the block level, and move more frequently accessed blocked to the SSD?
I don't know for sure; I'm hoping to interview somebody from LSI to find out! I would guess that you're right: I've read nothing to indicate that it "knows" about filesystems, so it seems it must work at the block level.
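To show what I mean by block level, here's a tiny Python sketch of my mental model: the controller counts reads per block address and copies the hottest blocks to the SSD, without knowing anything about files or filesystems. The block size, promotion threshold, and cache capacity are made-up values, and this is just a guess at the general idea, not LSI's actual implementation:

```python
# Minimal sketch of block-level SSD read caching (my assumption of the concept,
# not LSI CacheCade internals). Hot blocks are copied, not moved, to the SSD.
from collections import Counter

PROMOTE_AFTER = 3         # assumed: promote a block after this many reads
SSD_CAPACITY_BLOCKS = 4   # tiny on purpose, to keep the demo readable

class BlockCache:
    def __init__(self):
        self.read_counts = Counter()  # heat map keyed by logical block address
        self.ssd = {}                 # block address -> cached copy of the data

    def read(self, lba, array_read):
        """Serve from SSD if cached; otherwise read the array and maybe promote."""
        if lba in self.ssd:
            return self.ssd[lba], "SSD hit"
        data = array_read(lba)
        self.read_counts[lba] += 1
        if self.read_counts[lba] >= PROMOTE_AFTER and len(self.ssd) < SSD_CAPACITY_BLOCKS:
            self.ssd[lba] = data      # a copy: the RAID volume still holds the data
        return data, "array read"

# A toy dict stands in for the RAID5 volume.
raid_volume = {lba: f"block-{lba}" for lba in range(10)}
cache = BlockCache()
for _ in range(4):
    print(cache.read(7, raid_volume.get))  # block 7 gets hot, then lands on the SSD
```

Note that promotion is a copy, not a move, which matters for the next question.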
If the SSD fails, do you lose the whole raid? Or merely just the extra "cached" performance?
Just the cache goes away. I tested removing the SSD (both physically and logically), using my older 96GB SATA2 Kingston SSD, and the data on the RAID0, RAID50, and RAID5 arrays that I tested remained intact.
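That behavior lines up with the sketch above: for read caching, the SSD only ever holds copies of blocks that still live on the RAID volume, so pulling the SSD just costs you the acceleration. Here's a trivial, self-contained illustration of that fallback; again, this reflects my assumption based on my pull-the-SSD test, not LSI documentation:

```python
# Why losing the cache SSD only costs speed: every cached block is a copy
# of data that still lives on the RAID volume (read-caching case).
raid_volume = {"lba0": "vm data", "lba1": "more vm data"}  # authoritative copy
ssd_cache = dict(raid_volume)                              # copies of hot blocks

def read(lba):
    # Serve from the SSD copy when present, otherwise fall back to the array.
    return ssd_cache.get(lba, raid_volume[lba])

print(read("lba0"))  # fast path, via the SSD copy
ssd_cache.clear()    # simulate the cache SSD failing or being pulled
print(read("lba0"))  # still "vm data": slower, but nothing is lost
```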
As far as ESXi is concerned, I assume it can only "see" a single volume presented to it and is unaware of the SSD caching happening in the background.
Yes, it's totally unaware, and that's the beauty of it: no special software-based support is needed, in contrast to the much-more-affordable but software-heavy Intel RST technology (which does work well for Windows, as I demonstrated here). This means this LSI RAID setup should also work nicely for native Hyper-V installs, although that's not my priority.
So, to sum things up, I've pinned all my hopes on the CacheCade 2.0 read/write caching that the 9265-8i will offer in 1Q2012, and that won't be offered for the less pricey 9260 series. I sure hope LSI makes good on that promise, and that the speed is good. I briefly tried the CacheCade 1.0 hardware dongle, but with older SATA2 SSDs the speed boost wasn't worth the price, so I returned it and am saving up for the CacheCade 2.0 FastPath bundle instead. I'm hoping the cost will come in under $300 or so, based on similar products already listed.
FYI, I discussed CacheCade 2.0 in detail, way back in September 2011 here:
TinkerTry.com/goodraidcontrollerswithssdcachingandesxsupport
If CacheCade 2.0 never comes out for the 9265-8i, I'll certainly look like quite the chump, and will have overspent considerably. But the speed of my current, no-SSD-cache-yet RAID array is sufficient.
You could say I'm already putting my eggs into this LSI RAID basket, trusting some of my VMs to this array. But I always keep backups, and I'll certainly have full backups before I finally get to try CacheCade 2.0.
I'm intentionally holding off on buying the SSD until CacheCade 2.0 is actually available, since prices continue to fall, and SSD controller chips and firmware rapidly mature. My hope is that I'll be able to get a best-in-class SATA3 drive with a budget of roughly $200. Ideally, I want an SSD with automatic, TRIM-like background garbage collection, since RAID-attached SSDs don't receive TRIM commands, and I want the cache to maintain its performance over time.
I would have preferred a RAID adapter with both proper UEFI support (instead of a slow BIOS routine) and a battery-less (supercapacitor/ultracapacitor) design, but I still don't see such adapters on the market, and it'll probably be quite a while before they mature and gain full ESXi support, time I didn't have for this project.
At month 7 and counting, I feel reasonably confident I'm finally nearing the end zone. A perfect touchdown would be for vZilla to soon have a best-in-class RAID5 array with SSD-like speeds, or at least speeds far greater than anything a similarly priced NAS storage vendor offers.
A little more time, and testing, will help me finish this tale. Stay tuned!