LSI 9265-8i CacheCade Pro 2.0 is finally using my Samsung 830 256GB SSD for RAID5 read and write caching, boosting performance

Posted by Paul Braren on Jan 6 2013 in
  • ESXi
  • Reviews
  • Storage
I'm the guy that likes to abuse my RAID arrays, described here and here. So this success with CacheCade Pro 2.0 is that much sweeter, even if it took about a year and a half to fully bake.

    This vZilla project, which inspired me to start this web site on June 1 2011 in the first place, was in jeopardy of having a failed storage strategy. Without CacheCade Pro 2.0 SSD caching of my 5.6TB RAID5 array, I could have obtained similar speeds at a much lower price than my LSI 9265-8i cost. But victory was at last achieved last night, with initial tests indicating a favorable outcome.

    Stay with me here to the end of this saga, where you'll see some initial benchmark tests, and rather impressive results!

    Timeline, June 2011 to January 2013:

    I've had the LSI 9265-8i RAID controller since the summer of 2011, with the promise of having excellent read and write speeds for my RAID5 array, explained here:

    Good RAID Controllers with SSD caching and ESX support
    Aug 13, 2011 07:14 pm

    I soon decided to return my 9260-8i, going instead with the RAID5 CacheCade Pro 2.0 capable 9265-8i. Why? Only that model supported RAID5 read+write caching:

    Z68 Sandybridge Motherboard VT-d Test Matrix: Which Mobo/CPU combo works with VMware ESXi 4.1U1 VMDirectPath feature?
    Jul 14, 2011 06:56 pm

    By December, I had fully baked my storage strategy, and published it at:

    The reasoning behind vZilla’s storage configuration
    Dec 05, 2011 09:27 pm

    Basically, I'd format the entire internal RAID5 array as one big 5.6TB VMFS storage device in ESXi 5. I'd then put my VMs on that array, which would hopefully perform quite well, especially upon performing procedures the 2nd and 3rd time (for caching effects to start helping).

    By May, the prerequisite hardware key and firmware to support CacheCade Pro 2.0 finally arrived:

    LSI CacheCade Pro 2.0 / FastPath FAQ has arrived, and so have the 30 day trial keys!
    May 02, 2012 10:59 pm

    But then I had a nasty scrape, losing my array with an early firmware and an entirely unsupported OCZ Vertex 4 (so this "loss" was entirely my own fault):

    Playing with LSI CacheCade Pro 2.0: The OCZ Vertex 4 256GB SSD (VTX4-25SAT3-256G) (Caution!)
    May 03, 2012 01:33 am

    The actual RAID array drop can be seen in this newly published video, which is just a first-stab attempt at establishing whether my read and write speeds are now in the right ballpark:

    Only the non-critical VMs that were left running at the time of the failure were affected; all other data on the array was unaffected by this incident. I also need to emphasize that the OCZ Vertex 4 is still not on the CacheCade Pro SSD compatibility list from Dec 2012, found on the 9265-8i Resources tab, with the details on pages 26 to 29:

    Interoperability Report for MegaRAID Value and Feature 6Gb/s SAS Controllers, Dec 13 2012

    Worse yet, I had a problem with later firmwares and tests with my Samsung 830 256GB SSD: I couldn't enable CacheCade Pro 2.0 for reads and writes, which defeated the whole purpose of having this card. The exact issue can be seen in this video:

    So, working with LSI Technical Support for months (and sharing the above videos with LSI), we decided to try to get to the bottom of why I couldn't enable read and write caching on my particular configuration by simply replacing my LSI00290 CacheCade Pro 2.0 hardware key:


    I removed the old key, which marked my array as Foreign and not importable. In hindsight, the wise move would have been to disable CacheCade Pro 2.0 in the MegaRAID UI before shipping the key in for exchange. Oops!


    A week later, LSI had mercy on me, and kindly overnighted the new LSI00290. Thank you Sean and Jason! It arrived on January 4th, 2013. I then let it warm up to room temperature, installed it, and ta-da, complete success, seen in the video below:

    Here's the before (no CacheCade) and after (CacheCade Pro 2.0 enabled) results, using ATTO Disk Benchmark with 2GB setting, and a thick provisioned 750GB C: drive residing on this ESXi 5 VMFS formatted RAID5 array:

    CacheCade Pro 2.0 disabled, 2GB Total Length, Queue Depth 4

    Drum roll please, here are the results on the 1st run (seen in the video) after enabling CacheCade Pro 2.0 in my MegaRAID VM:

    First run with CacheCade Pro 2.0 enabled, 2GB Total Length, Queue Depth 4

    "Real" benchmark tips here:

    LSI MegaRAID Controller Benchmark Tips
    November 6, 2012

    which frankly interests me less than the real-world use and home-brew tests I'll be doing, since those are more meaningful for my configuration. But I do take from it that I should look at the effects of raising the queue depth, with results seen below.

    Time will tell if the Samsung 256GB PM830 SSD works out. I'm using the latest firmware, which is still CXM03B1Q (Release Date: 2012-01-1), from here:

    but it isn't an exact match for the model listed on page 28 of the Interoperability Report:

    Samsung 256GB PM830, MZ7PC256HAFU-000DA with firmware 1W1Q

    The Intel and other well-known-brand models they do list tend to be enterprise-centric (pricey), naturally. It's not "normal" for home-build virtualization enthusiasts to use these for single-user storage labs.

    But the interoperability list has grown considerably since last summer, back when I met an LSI engineer at VMworld 2012 in San Francisco and discussed this whole matter at length.

    Let's hope gems like the beloved Samsung 840 Pro show up soon! Meanwhile, I may just wind up leaving my array as is and moving on to more important projects, like rebuilding my Windows Server 2012 Essentials system for daily PC backups, now that I know my array and storage strategy seem to be performing well, and stable, so far...

    Lessons learned:

    1) Thin Provisioning
    Thin provisioning versus thick provisioning on the RAID5 array seemed to have no effect at all on ATTO Disk Benchmark performance.

    3 runs at 2GB Total Length, Queue Depth 4

    Thin provisioned results look the same as thick provisioned results (an NTFS VM on the same RAID5 array performs largely like the thick provisioned first test seen above). You'll also note that multiple runs produced the same numbers each time, so results were consistent, as seen here:

    256MB Total Length, Queue Depth 4
    256MB Total Length, Queue Depth 10

    2) 2GB dataset required for ATTO Disk Benchmark on caching RAID controllers
    The default 256MB Total Length fits entirely in the controller's 1GB cache, greatly and artificially inflating benchmark results:

    256MB Total Length, Queue Depth 4
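The arithmetic behind this lesson can be sketched in a few lines. This is a rough illustrative model, not a measurement from my setup; the cached and uncached throughput figures are assumptions chosen only to show the shape of the effect:

```python
# Rough model of why a benchmark working set smaller than the
# controller cache inflates results. The throughput numbers are
# illustrative assumptions, not measurements.

CACHE_GB = 1.0          # LSI 9265-8i onboard DRAM cache size
CACHED_MBPS = 2500.0    # assumed speed when IOs are served from DRAM cache
DISK_MBPS = 300.0       # assumed speed from the RAID5 spindles themselves

def apparent_throughput(test_len_gb: float) -> float:
    """Blend cached and uncached speeds by the fraction of the
    benchmark working set that fits in the controller cache."""
    hit_fraction = min(CACHE_GB / test_len_gb, 1.0)
    return hit_fraction * CACHED_MBPS + (1 - hit_fraction) * DISK_MBPS

# A 256MB test fits entirely in cache, so the number is pure cache speed.
print(f"256MB run: ~{apparent_throughput(0.25):.0f} MB/s")
# A 2GB test overflows the cache, so the inflation is diluted.
print(f"2GB run:   ~{apparent_throughput(2.0):.0f} MB/s")
```

The larger the Total Length relative to the controller cache, the closer the reported number gets to what the array can actually sustain, which is why 2GB (or more) is the honest setting here.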

    3) Moving from the default Queue Depth of 4 to the recommended 10 gives a considerable boost to the 16K-and-under results
    This hints that a heavy, mixed workload of many VMs doing small IOs at once is where this controller will really shine, given it's the speed at 4K that really matters:

    2GB Total Length, Queue Depth 10
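One way to see why a deeper queue helps small IOs is Little's law: sustained IOPS is roughly the number of IOs kept in flight divided by the per-IO latency. A minimal sketch, where the 0.5ms latency figure is an assumed illustration rather than anything measured on my array:

```python
# Little's law sketch: while per-IO latency stays roughly constant,
# keeping more IOs in flight raises total IOPS, until the device
# saturates. The latency value below is an illustrative assumption.

IO_LATENCY_S = 0.0005   # assumed 0.5 ms service time per 4K IO

def iops_at_queue_depth(qd: int) -> float:
    """Little's law: concurrency = throughput * latency,
    so throughput = concurrency / latency."""
    return qd / IO_LATENCY_S

for qd in (4, 10):
    print(f"QD {qd:2d}: ~{iops_at_queue_depth(qd):,.0f} IOPS")
```

Real hardware saturates well before this linear model suggests, but the direction matches what the benchmark shows: a hypervisor juggling many VMs naturally keeps a deep queue of small IOs, which is exactly where the caching controller pulls ahead.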

    Additional references:
    LSI MegaRAID CacheCade Pro 2.0 Software Evaluation, By Boston Limited
    LSI 9265-8i & VMware ESXi (health monitoring, MegaRAID UI in a VM, & CacheCade 2.0)

    Update Jan. 06, 2013:
    Looking back, if I were buying a caching RAID controller today, things would likely only change a little.

    Adaptec MaxCache 3.0 has come along to compete with LSI CacheCade Pro 2.0, for example.

    One model, the Adaptec 7805Q at around $1100:

    has this read and write caching, plus supercapacitor-based Zero-Maintenance Cache Protection.

    Such supercapacitor protection is on the LSI 9286CV-8eCC as well, called MegaRAID CacheVault Flash Cache Protection. But that unit still has the same performance as my 9265-8i, using the same LSISAS2208 dual-core 6Gb/s ROC with two 800MHz PowerPC processors. The bus speed is boosted, but with my modest needs and overall throughput, I really doubt I'd notice any difference in my speeds.

    In the end, it appears I'm not missing out on some amazing speed boost at a lower price. So no regrets. And if I were starting all over today, I'd likely evaluate the Adaptec 7805Q against the 9286CV-8eCC first hand, using a bunch of benchmarks and real-world usage to determine my actual speeds seen during day-to-day home lab usage.

    I'll be watching:

    Anyone have any thoughts on the newer Adaptec RAID adapters?

    I don't have many spindles, but I'm burning more watts for my five 1.5TB drives than I'd like, so I find this interesting:

    See also the related graphic from