Although it is becoming less interesting over time, the hypervisor is still the cornerstone of the modern data center. As we enter the age of the hybrid cloud, that data center is stretching into the cloud. With the rise of containers, we are seeing clouds move back to bare metal once more. While this works for new applications, it does not necessarily work for existing ones. Through 2017, the hypervisor will remain important to the data center and to many clouds. What happens after that will depend on the impact of many new technologies. Here is our 2016–2017 cost comparison spreadsheet.
Articles Tagged with KVM
I was tinkering around with XenServer the other day. I know, I can hear you saying, “Is that a thing?” Well, it is, but that is not what I am going to talk about today. Time for a tangent. I thought I would look for a third-party virtual switch for XenServer, but it seems that XenServer is a third-class citizen in this space: there is no Cisco Nexus 1000V available for XenServer, even though Cisco previewed one at Citrix Synergy Barcelona in 2012.
Nutanix, one of the fastest-growing IT infrastructure startups around, shows no signs of slowing down with the release of Nutanix OS 3.5. For those not familiar with Nutanix, it offers a truly converged virtualized infrastructure. This generally consists of four nodes in two rack units of space, where each node has CPU, RAM, traditional fixed disk, SSD, and Fusion-io flash built in. Its secret sauce is NDFS, the Nutanix Distributed File System, built by some of the same folks who created the Google File System, along with a unified, hypervisor-agnostic management interface.
The OpenStack Summit this week continued to fan the flames of the software-defined data center. The software-defined data center is simply a term for replacing traditional data center hardware functionality with the same features implemented in software, running on commodity x86 servers. While software-defined approaches to data center features are at least nominally less expensive than their hardware counterparts, the real promise of the approach is flexibility and ease of management with high levels of integration. Reconfiguring a network to support the security requirements of a new application is now just a function of software and APIs. Expanding storage is simply adding another node with more storage attached, and the cluster compensates automatically. Even things like firewall rules and load balancer configurations can now be stored as templates along with the applications, to be provisioned in minutes.
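The idea of storing firewall rules and load balancer settings as templates alongside an application can be sketched as plain data plus a render step. This is a minimal illustration only; the structure, field names, and `render` function below are assumptions for the sketch, not the API of any particular cloud platform:

```python
# Hypothetical application template: firewall rules and load balancer
# settings live as versionable data next to the application definition.
app_template = {
    "name": "webapp",
    "firewall_rules": [
        {"port": 443, "protocol": "tcp", "source": "0.0.0.0/0"},
        {"port": 22, "protocol": "tcp", "source": "10.0.0.0/8"},
    ],
    "load_balancer": {"algorithm": "round-robin", "backend_port": 8080},
}


def render(template, environment):
    """Expand a template into a concrete, per-environment configuration.

    In a real software-defined data center this rendered configuration
    would be handed to an orchestration API; here we only build the dict.
    """
    config = dict(template)
    config["name"] = f"{environment}-{template['name']}"
    return config


prod_config = render(app_template, "prod")
print(prod_config["name"])  # prod-webapp
```

The point of the sketch is that, once infrastructure settings are data, provisioning a new environment is a render-and-submit operation rather than a manual reconfiguration, which is where the "provisioned in minutes" claim comes from.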
Where is the center of IT automation today? The network is the nerve center, the workloads are the brains, and storage, compute, and memory are the other internal organs. The heart of our IT machine is often a management tool such as VMware vCenter, System Center, or something else entirely. But should our IT automation live within the heart, when the heart controls only virtual and some physical components? Or should IT automation be tied to the nervous system that crosses boundaries?
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has been embraced and thrives in the consumer market, but will it really take off in the corporate world, and should it? One of the main drivers of virtualization, in the beginning, was the ability to consolidate physical systems into a virtual environment: shrinking the overall footprint, taking full advantage of all available compute resources in a physical server, and gaining centralized control of compute, storage, and networking resources.