Disclosure: I work for IBM's storage division, but am not involved in this NVIDIA collaboration in any way. This is a personal blog, reflecting my own opinions and observations.
Today, I was rather surprised to see announcements about this new NVLink. Can you imagine a shared GPU (vGPU/vSGA) ready to be leveraged by multiple VMs? Maybe for VMware and Hyper-V? Tantalizing thoughts, but we'll likely have to wait until 2016 to really learn more, once it's actually introduced. Just thought you might be interested, given that I tend to think ahead about things that'll relieve the bottlenecks in the typical home lab. This one appears to be a bit more enterprise-y at launch, discussed here, but isn't that the way a lot of this stuff eventually makes its way into our own labs?
NVIDIA will add NVLink technology into its Pascal GPU architecture -- expected to be introduced in 2016 -- following this year's new NVIDIA Maxwell compute architecture. The new interconnect was co-developed with IBM, which is incorporating it in future versions of its POWER CPUs.
"NVLink technology unlocks the GPU's full potential by dramatically improving data movement between the CPU and GPU, minimizing the time that the GPU has to wait for data to be processed," said Brian Kelleher, senior vice president of GPU Engineering at NVIDIA.
Nvidia and IBM have developed an interconnect that will be integrated into future graphics processing units, letting GPUs and CPUs share data five times faster than they can now, Nvidia announced today. The fatter pipe will let data flow between the CPU and GPU at rates higher than 80GB per second, compared to 16GB per second today.

Nvidia and IBM see NVLink as a competitor to PCI Express 3.0. Most of today's GPUs are connected to x86-based CPUs through the PCIe interface, which limits the GPU's ability to access the CPU memory system. NVLink solves this problem by matching the bandwidth of typical CPU memory systems, letting GPUs access CPU memory at that memory's full bandwidth.
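To put those bandwidth figures in perspective, here's a quick back-of-the-envelope sketch in Python. The link rates come straight from the announcement (~16GB/s for PCIe 3.0 x16, 80+GB/s for NVLink); the 12GB working-set size is just a hypothetical example I picked, and the calculation ignores latency and protocol overhead entirely.

```python
# Rough, idealized comparison of CPU<->GPU transfer times.
# Bandwidth figures are from the NVLink announcement; the working-set
# size is an illustrative assumption, not a real benchmark.
PCIE3_X16_GBPS = 16.0   # approximate effective PCIe 3.0 x16 bandwidth, GB/s
NVLINK_GBPS = 80.0      # announced NVLink rate, GB/s

def transfer_seconds(gigabytes: float, bandwidth_gbps: float) -> float:
    """Idealized transfer time: ignores latency and protocol overhead."""
    return gigabytes / bandwidth_gbps

working_set_gb = 12.0  # hypothetical chunk of data staged from CPU memory

pcie_time = transfer_seconds(working_set_gb, PCIE3_X16_GBPS)
nvlink_time = transfer_seconds(working_set_gb, NVLINK_GBPS)
print(f"PCIe 3.0 x16: {pcie_time:.2f} s, NVLink: {nvlink_time:.2f} s "
      f"({pcie_time / nvlink_time:.0f}x speedup)")
# → PCIe 3.0 x16: 0.75 s, NVLink: 0.15 s (5x speedup)
```

Nothing fancy, but it shows where the "five times faster" headline number comes from: 80GB/s divided by 16GB/s.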