CPU, Storage, and Virtualization Glossary

Posted by Paul Braren on Jan 5 2012 (updated on Jan 5 2015) in
  • CPU
  • Storage
  • Virtualization
  • VT-c

    The focus of this glossary is virtualization, storage, motherboard, and network technologies. Vendors like Intel, Adaptec, LSI, and VMware have quite an alphabet soup of terms, such as VT-c described below, and I find they can be pretty tough to keep straight. Hope this helps! Each section is cut and pasted from the listed source.


    Hybrid RAID  Adaptec controllers write to both the HDD and the SSD, and read from the SSD 100% of the time, resulting in maximum performance.

    Hybrid RAID arrays of Solid State Drive (SSD) and Hard Disk Drive (HDD) offer tremendous performance gains over standard HDD RAID arrays by performing read operations from the faster SSD and write operations on both the SSD and HDD. The result is a higher number of read operations per second with no degradation of write I/O performance, and complete transparency to the operating system and all running applications.
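
    The read/write policy described above can be sketched in a few lines. This is a toy model of the idea (the `Drive` and `HybridMirror` names are made up for illustration, not an Adaptec API):

```python
# Minimal sketch of the hybrid RAID policy: writes hit both drives,
# reads are always served from the faster SSD.

class Drive:
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

    def read(self, lba):
        return self.blocks[lba]

class HybridMirror:
    """RAID 1-style pair: every write goes to both drives,
    every read is served from the SSD."""
    def __init__(self, ssd, hdd):
        self.ssd, self.hdd = ssd, hdd

    def write(self, lba, data):
        self.ssd.write(lba, data)   # write hits the SSD...
        self.hdd.write(lba, data)   # ...and the HDD, keeping both in sync

    def read(self, lba):
        return self.ssd.read(lba)   # reads come from the SSD 100% of the time

array = HybridMirror(Drive("ssd0"), Drive("hdd0"))
array.write(0, b"hello")
print(array.read(0))  # b'hello', served from the SSD
```

    Because writes still land on the HDD synchronously, the operating system sees an ordinary redundant volume, which is why the arrangement is transparent to applications.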

    maxCache  IT departments can get the most out of SSDs by creating high-performance hybrid arrays made up of SSDs and SATA/SAS HDDs, converting industry-standard servers into cost-effective, high-performance, scale-out application storage appliances.


    RST: Rapid Storage Technology (Intel® RST) 10.5  With additional hard drives added, provides quicker access to digital photo, video, and data files with RAID 0, 5, and 10, and greater data protection against a hard disk drive failure with RAID 1, 5, and 10. Supports greater than 2.2 TB HDD RAID configurations. Support for external SATA (eSATA) enables the full SATA interface speed outside the chassis, up to 3 Gb/s.
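
    The RAID levels mentioned above trade capacity for either speed or safety. Here's an illustrative sketch (not Intel RST itself) of how RAID 0 stripes blocks across drives while RAID 1 mirrors them:

```python
# Toy layouts showing where logical blocks land on the member drives.

def raid0_layout(blocks, drives):
    """Round-robin striping: block i lands on drive i % n.
    Fast, but losing any one drive loses the array."""
    layout = {d: [] for d in range(drives)}
    for i, block in enumerate(blocks):
        layout[i % drives].append(block)
    return layout

def raid1_layout(blocks, drives):
    """Mirroring: every drive holds a full copy of every block.
    Any single drive can fail without data loss."""
    return {d: list(blocks) for d in range(drives)}

blocks = ["b0", "b1", "b2", "b3"]
print(raid0_layout(blocks, 2))  # {0: ['b0', 'b2'], 1: ['b1', 'b3']}
print(raid1_layout(blocks, 2))  # both drives hold all four blocks
```

    RAID 5 and 10 combine these two ideas, striping for speed while keeping parity or mirror copies for protection.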

    SRT:  Smart Response Technology  Implements storage I/O caching to provide users with faster response times for things like system boot and application startup.  From anandtech.com "Both reads and writes are cached with SRT enabled. Intel allows two modes of write caching: enhanced and maximized. Enhanced mode makes the SSD cache behave as a write through cache, where every write must hit both the SSD cache and hard drive before moving on. Whereas in maximized mode the SSD cache behaves more like a write back cache, where writes hit the SSD and are eventually written back to the hard drive but not immediately."
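
    The two SRT write modes quoted above map directly onto the classic write-through vs. write-back cache designs. A minimal sketch of the difference (class and field names are mine, not Intel's implementation):

```python
# "Enhanced" mode = write-through: every write hits cache AND disk.
# "Maximized" mode = write-back: writes hit the cache now, disk later.

class CachedDisk:
    def __init__(self, write_back=False):
        self.cache = {}      # stands in for the SSD cache
        self.disk = {}       # stands in for the HDD
        self.dirty = set()   # blocks not yet written back to the HDD
        self.write_back = write_back

    def write(self, lba, data):
        self.cache[lba] = data
        if self.write_back:
            self.dirty.add(lba)      # maximized mode: HDD updated later
        else:
            self.disk[lba] = data    # enhanced mode: HDD updated immediately

    def flush(self):
        for lba in self.dirty:
            self.disk[lba] = self.cache[lba]
        self.dirty.clear()

wt = CachedDisk(write_back=False)
wt.write(7, "x")
print(7 in wt.disk)   # True: write-through hits the disk right away

wb = CachedDisk(write_back=True)
wb.write(7, "x")
print(7 in wb.disk)   # False until flush() writes it back
wb.flush()
print(7 in wb.disk)   # True
```

    The trade-off is the usual one: write-back is faster because the slow HDD is off the critical path, but any dirty data still in the SSD cache is at risk if the cache is lost before it's flushed.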

    VT-d (Intel® Virtualization Technology for Directed I/O)  The relationship between VT and VT-d is that the former is an "umbrella" term referring to all Intel virtualization technologies and the latter is a particular solution within a suite of solutions under this umbrella.

    The overall concept behind VT is hardware support for isolating and restricting device accesses to the owner of the partition managing the device.

    A VMM may support various models for I/O virtualization, including emulating the device API, assigning physical I/O devices to VMs, or permitting I/O device sharing in various manners. The key problem is how to isolate device access so that one resource cannot access a device being managed by another resource.

    More info in the article "A Superior Hardware Platform for Server Virtualization."

    Intel VT-d speeds data movement and eliminates much of the performance overhead by reducing the need for VMM involvement in managing I/O traffic. It accomplishes this by enabling the VMM to securely assign specific I/O devices to specific guest OSs. Each device is given a dedicated area in system memory that can be accessed only by the device and by its assigned guest OS. Once the initial assignments are made, data can travel directly between a guest OS and its assigned devices. I/O traffic flows more quickly and the reduced VMM activity decreases the load on the server processors. Security and availability are also improved, since I/O data intended for a specific device or guest OS cannot be accessed by any other hardware or guest software component.
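
    Conceptually, the remapping hardware enforces a table that says which memory region each device is allowed to touch. Here's a toy model of that isolation check (names like `Iommu` and the address ranges are purely illustrative, not the VT-d programming interface):

```python
# Toy VT-d/IOMMU model: each device may DMA only into the region
# assigned to it and its guest OS; everything else is blocked.

class IommuError(Exception):
    pass

class Iommu:
    def __init__(self):
        self.assignments = {}  # device -> (start, end) of permitted region

    def assign(self, device, start, end):
        """Done once by the VMM; afterwards DMA bypasses the VMM."""
        self.assignments[device] = (start, end)

    def dma(self, device, addr):
        start, end = self.assignments.get(device, (0, -1))
        if not (start <= addr <= end):
            raise IommuError(f"{device}: blocked DMA to {addr:#x}")
        return f"{device}: DMA to {addr:#x} allowed"

iommu = Iommu()
iommu.assign("nic0", 0x1000, 0x1fff)   # nic0 belongs to one guest's region
print(iommu.dma("nic0", 0x1800))       # inside its region -> allowed
try:
    iommu.dma("nic0", 0x9000)          # another guest's memory -> blocked
except IommuError as e:
    print(e)
```

    The key point is that the check happens in hardware on every DMA, so the VMM only pays the setup cost once rather than mediating every transfer.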

    VT-c (Virtualization Technology for Connectivity)  Better Virtualization Support in Intel® I/O Devices. As businesses deploy more and more applications in virtualized environments, and as they take advantage of live migration to save power or boost availability, the demands on virtualized I/O increase significantly. Intel VT-c optimizes the network for virtualization by integrating extensive hardware assists into the I/O devices that are used to connect your servers to your data center network, storage infrastructure, and other external devices. In essence, this collection of technologies functions much like a post office that sorts an enormous variety of incoming letters, packages, and envelopes and delivers them to their respective destinations. By performing these functions in dedicated network silicon, Intel VT-c speeds delivery and reduces the load on the VMM and server processors.

    Intel VT-c includes two key technologies, Virtual Machine Device Queues (VMDq) and Virtual Machine Direct Connect (VMDc), which are now supported in all Intel® 10 Gigabit Server Adapters and selected Intel® Gigabit Server Adapters. In a traditional server virtualization environment, the VMM has to sort and deliver every individual data packet to its assigned virtual machine. This can consume a lot of processor cycles. With VMDq, this sorting function is performed by dedicated hardware in Intel Server Adapters. All the VMM has to do is route the presorted packet groups to the appropriate guest OSs. I/O latency is reduced and the processor has more cycles available for business applications. Intel VT-c can more than double I/O throughput and achieve near-native throughput for virtualized applications, so more applications can be consolidated per server with fewer I/O bottlenecks.

    VMDc (Virtual Machine Direct Connect)  Allows virtual machines to access network I/O hardware directly, using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard, helping to improve virtualized performance dramatically. As discussed in the previous section, Intel VT-d enables a direct communication channel between a guest OS and an I/O port on the device. SR-IOV extends this by enabling multiple direct communication channels for each I/O port on the device. For example, each of ten guest OSs could be assigned a protected, dedicated 1 Gb/s link to the corporate network through a single port on the Intel® 10 Gigabit Server Adapter. These direct communication links bypass the VMM switch to enable faster I/O performance with less load on the server processors.

    Virtual Machine Device queues (VMDq)  VMDq reduces I/O overhead on the hypervisor in a virtualized server by performing data sorting and coalescing in the network silicon. VMDq technology makes use of multiple queues in the network controller. As data packets enter the network adapter, they are sorted, and packets traveling to the same destination (or virtual machine) get grouped together in a single queue. The packets are then sent to the hypervisor, which directs them to their respective virtual machines. Relieving the hypervisor of packet filtering and sorting improves overall CPU usage and throughput levels.
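
    The sort-and-queue behavior described above is easy to picture in code. This is a software model of what the adapter silicon does (the MAC addresses and packet contents are made up for illustration):

```python
# VMDq-style sorting: group incoming packets into one queue per
# destination VM, so the hypervisor only routes presorted queues.

from collections import defaultdict

# Which VM owns which virtual NIC (MACs are illustrative)
vm_by_mac = {"aa:01": "vm1", "aa:02": "vm2"}

def sort_packets(packets):
    """Performed in dedicated hardware by VMDq; modeled here in software."""
    queues = defaultdict(list)
    for dst_mac, payload in packets:
        queues[vm_by_mac[dst_mac]].append(payload)
    return queues

packets = [("aa:01", "p1"), ("aa:02", "p2"), ("aa:01", "p3")]
queues = sort_packets(packets)
print(dict(queues))  # {'vm1': ['p1', 'p3'], 'vm2': ['p2']}
```

    Without VMDq, the hypervisor would run the equivalent of this loop itself for every packet; with it, the CPU only sees already-grouped queues.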


    CacheVault Technology  CacheVault technology provides RAID controller cache protection using NAND flash memory and a supercapacitor. In the event of a power or server failure, CacheVault technology automatically transfers cached data from the DRAM cache to flash. Once power is restored, the data in the NAND flash is copied back into cache until it can be flushed to the disk drives. This technology eliminates the need for the lithium-ion battery backups that are traditionally used to protect cache memory on PCI RAID controllers.
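
    The power-loss sequence above can be sketched as a small state machine. This is a toy model of the flow, not LSI firmware; the class and method names are mine:

```python
# CacheVault-style flow: DRAM cache -> NAND flash on power loss,
# NAND flash -> cache -> disks on power restore.

class Controller:
    def __init__(self):
        self.dram_cache = {}   # volatile write cache
        self.nand_flash = {}   # non-volatile backup area
        self.disks = {}

    def cached_write(self, lba, data):
        self.dram_cache[lba] = data     # acknowledged before hitting disk

    def power_loss(self):
        # The supercapacitor powers this one copy from DRAM to flash
        self.nand_flash = dict(self.dram_cache)
        self.dram_cache = {}            # DRAM contents are lost

    def power_restore(self):
        self.dram_cache = dict(self.nand_flash)  # copy back into cache
        self.disks.update(self.dram_cache)       # then flush to the drives
        self.nand_flash.clear()

c = Controller()
c.cached_write(1, "data")
c.power_loss()
c.power_restore()
print(c.disks[1])  # 'data' survived the outage, no battery involved
```

    A supercapacitor only needs to hold charge long enough for the single DRAM-to-flash copy, which is why it can replace a battery that would otherwise have to keep DRAM refreshed for hours.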

    SSD Guard™  SSDs are known for their reliability and performance. LSI SSD Guard technology, which is unique to MegaRAID controllers, increases the reliability of SSDs by automatically copying data from a drive that is likely to fail to a designated hot spare or newly inserted drive. A predictive failure event notification, or S.M.A.R.T. command, automatically initiates this rebuild to preserve the data on an SSD whose health or performance falls below par. If a hot spare is not present or not assigned, MegaRAID Storage Manager (MSM) will recommend that the user insert a hot spare drive into an available slot.

    Because SSDs are very reliable, non-redundant RAID 0 configurations are much more common than in the past. SSD Guard technology offers added data protection for RAID 0 configurations by actively monitoring the status of the SSDs. SSD Guard, together with MegaRAID FastPath software, allows users to take full advantage of the reliability and performance attributes of SSDs.
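
    The SSD Guard reaction to a predictive-failure event boils down to "copy now, before the drive actually dies." A toy sketch of that logic (this is my illustration, not the MegaRAID firmware):

```python
# On a S.M.A.R.T. predictive-failure warning, copy the at-risk drive's
# data to a hot spare; if none exists, ask the user to add one.

def on_smart_warning(failing_drive, hot_spare):
    """failing_drive and hot_spare are dicts of lba -> data."""
    if hot_spare is None:
        return "recommend inserting a hot spare drive"
    hot_spare.update(failing_drive)   # rebuild onto the spare
    return "data preserved on hot spare"

ssd = {0: "a", 1: "b"}
spare = {}
print(on_smart_warning(ssd, spare))  # data preserved on hot spare
print(spare == ssd)                  # True: the spare is a full copy
print(on_smart_warning(ssd, None))   # recommend inserting a hot spare drive
```

    This is what makes non-redundant RAID 0 more palatable on SSDs: the copy happens while the source drive is still readable, rather than after a failure when a RAID 0 set would already be lost.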

    MegaRAID® CacheCade™  CacheCade software is an advanced software option for LSI MegaRAID 6Gb/s SATA+SAS controller cards that is designed to accelerate the performance of HDD arrays with only an incremental investment in SSD technology. The software enables SSDs to be configured as a secondary tier of cache to maximize transactional I/O performance for read-intensive applications.

    MegaRAID® CacheCade Pro 2.0  Read and write caching software for 6Gb/s MegaRAID® SATA+SAS controller cards. Leverages SSDs in front of HDD volumes to create high-capacity, high-performance controller cache pools.

    LSI™ MegaRAID® FastPath™ Software An IO Accelerator for Solid State Drive Arrays. LSI MegaRAID FastPath software is a high-performance IO accelerator for Solid State Drive (SSD) arrays connected to a MegaRAID controller card. This advanced software is an optimized version of LSI MegaRAID technology that can dramatically boost storage subsystem and overall application performance — particularly those that demonstrate high random read/write operation workloads — when deployed with a 6Gb/s MegaRAID SATA+SAS controller connected to SSDs.


    VMDirectPath  VMDirectPath I/O device access enhances CPU efficiency in handling workloads that require constant and frequent access to I/O devices. It enables virtual machines to directly access underlying hardware devices. This maps a single HBA to a single VM and does not allow the HBA to be shared by more than one virtual machine. However, other virtualization features, such as vMotion, hardware independence, and sharing of physical I/O devices, will not be available to virtual machines using VMDirectPath I/O.

    Paravirtualized SCSI  VMware Paravirtualized SCSI (PVSCSI) is a special purpose driver for high-performance storage adapters that offers greater throughput and lower CPU utilization for virtual machines. It is best suited for environments in which guest applications are very I/O intensive. VMware requires that you create a primary adapter for use with the disk that will host the system software (boot disk) and a separate PVSCSI adapter for the disk that will store user data, such as a database. The primary adapter will be the default for the guest operating system on the virtual machine. For example, for a virtual machine with a Microsoft Windows Server 2008 guest operating system, LSI Logic is the default primary adapter. The PVSCSI driver is similar to vmxnet in that it is an enhanced and optimized special purpose driver for VM traffic, and it works with only certain guest OS versions, which currently include Windows Server 2003, Windows Server 2008, and RHEL 5. It can also be shared by multiple VMs running on a single ESX host, unlike VMDirectPath I/O, which dedicates a single adapter to a single VM.
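
    The boot-disk-plus-data-disk layout described above corresponds to a couple of lines in the VM's .vmx configuration. A rough sketch, assuming the commonly used VMX keys (the disk filename is a placeholder; verify the exact keys against your ESX version's documentation):

```
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"   # primary adapter for the boot disk
scsi1.present = "TRUE"
scsi1.virtualDev = "pvscsi"     # separate PVSCSI adapter for the data disk
scsi1:0.present = "TRUE"
scsi1:0.fileName = "data.vmdk"  # user data, e.g. a database volume
```

    In practice you would normally set this through the vSphere Client's "Add Hardware" wizard rather than editing the .vmx by hand.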