Microsoft is preparing to launch a new range of GPU-enabled virtual machines. Built on NVIDIA Tesla M60 and K80 GPUs, the new virtual machines offer the fastest GPUs available in the public cloud. This move leapfrogs Azure past AWS in both raw performance and the number of supported platforms.
At the GPU Technology Conference, NVIDIA CEO Jen-Hsun Huang and Tesla CEO Elon Musk discussed automotive security. Musk stated that physical access is still required to hack most vehicles and that critical systems such as brakes and steering are segregated from the control display. That conversation got me thinking about the security of the next generation of Internet of Things (IoT) devices.
There is a growing movement to abstract hardware away completely, as we have discussed previously. Docker with SocketPlane and other application virtualization technologies are abstracting hardware away from the developer. Or are they? The hardware is not an issue until it becomes one. Virtualization may require specific versions of hardware, but those are commonplace components. Advanced security requires other bits of hardware, and those are uncommon; many servers do not ship with them. Older hardware may not deliver the chipset features needed to do security well. This doesn't mean it can't be done, but the overhead is greater. Hardware is dead to some, but not to others, and this dichotomy drives decisions when buying systems for clouds or other virtual environments of any size. The hardware does not matter, until it does!
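As a concrete illustration of the point above, you can probe a Linux host for the chipset features in question before assuming the hardware "doesn't matter." The sketch below is hypothetical and not from any vendor tool: the function name and the exact feature set it reports are my own choices, and it simply reads the CPU flags from /proc/cpuinfo and checks sysfs for a TPM device.

```python
# Hypothetical helper: report whether a Linux host advertises the hardware
# features that virtualization and security stacks commonly depend on.
import os

def probe_platform_features(cpuinfo_path="/proc/cpuinfo"):
    """Return a dict of booleans describing host hardware capabilities."""
    flags = set()
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    # "flags : fpu vme ... vmx aes ..." -> set of flag names
                    flags.update(line.split(":", 1)[1].split())
                    break
    except OSError:
        pass  # non-Linux host or restricted /proc; report nothing found
    return {
        # Intel VT-x ("vmx") or AMD-V ("svm") virtualization extensions
        "vtx_or_amdv": bool(flags & {"vmx", "svm"}),
        # AES-NI instructions, used to keep encryption overhead low
        "aesni": "aes" in flags,
        # A TPM chip exposed by the kernel (one of the "uncommon bits")
        "tpm": os.path.isdir("/sys/class/tpm/tpm0"),
    }

if __name__ == "__main__":
    print(probe_platform_features())
```

On an older server, a result like `{'vtx_or_amdv': True, 'aesni': False, 'tpm': False}` is exactly the situation the paragraph describes: virtualization works, but doing security well costs more because the hardware assists are missing.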
The last mile of virtualization has multiple dimensions, depending on where you are going with virtualization. When you ask what it will take to get to 100% virtualized, whether within your data center or within a cloud (hybrid, public, or private), "it depends" is the answer you will hear most often. So, what will it take to get to 100% virtualized?
Let me start by saying that I see vendors discussing greenfield deployments of their products far more often than migration or integration. Personally, I think there is no such thing as a greenfield deployment unless the organization is just starting out, is building a brand-new data center, or perhaps has money to waste. In most cases, what is called greenfield is really just a grain of sand on an island of technology that still needs to integrate with the greater organization. As such, integration should be the foremost thought when products are developed. Instead, it is not, and the effort goes into becoming that island, or into a replacement install that is still an island.
In March 2013, Citrix announced that GPU sharing was working and available for XenApp (multi-user/RDS). In December 2013, they announced it was available for XenDesktop (multi-OS/VDI). The lack of GPU sharing had been a major barrier for many companies that need to deliver a high-end multimedia experience to their end users in order to gain acceptance for adoption.