I was tinkering with XenServer the other day. I can hear you saying, “is that a thing?” Well, it is, but that is not what I am going to talk about today. Time for a tangent. I thought I would have a look for a third-party virtual switch for XenServer, but it seems that XenServer is a third-class citizen in this space: there is no Cisco Nexus 1000V available for XenServer, even though Cisco previewed one at Citrix Synergy Barcelona in 2012.
Articles Tagged with KVM
Nutanix, one of the fastest-growing IT infrastructure startups around, shows no signs of slowing down with the release of Nutanix OS 3.5. For those not familiar with Nutanix, it offers a truly converged virtualized infrastructure: generally four nodes in two rack units of space, where each node has CPU, RAM, traditional spinning disk, SSD, and Fusion-io flash built in. The real secret sauce is NDFS, the Nutanix Distributed File System, built by the same folks who created the Google File System, along with a unified, hypervisor-agnostic management interface.
The OpenStack Summit this week continued to fan the flames of the software-defined data center. The software-defined data center is simply a term for replacing traditional data center hardware functionality with the same features implemented in software, running on commodity x86 servers. While software-defined approaches to data center features are at least nominally less expensive than their hardware counterparts, the real promise of the approach is flexibility and ease of management with high levels of integration. Reconfiguring a network to support the security requirements of a new application is now just a function of software and APIs. Expanding storage is simply a matter of adding another node with more storage attached; the cluster compensates automatically. Even firewall rules and load balancer configurations can now be stored as templates alongside the applications, to be provisioned in minutes.
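To make the template idea concrete, here is a minimal Python sketch of firewall rules shipped as a reusable template alongside an application definition and then "provisioned" per deployment. All names here (`FirewallRule`, `WEB_APP_TEMPLATE`, `provision`) are hypothetical illustrations, not part of any real OpenStack API; in a real software-defined data center the rendered records would be pushed to the network controller's API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FirewallRule:
    direction: str   # "ingress" or "egress"
    protocol: str    # e.g. "tcp"
    port: int
    cidr: str        # source/destination network in CIDR notation

# Template stored with the application: the rules it always needs,
# independent of where it is deployed.
WEB_APP_TEMPLATE = [
    FirewallRule("ingress", "tcp", 443, "0.0.0.0/0"),  # public HTTPS
    FirewallRule("ingress", "tcp", 22, "10.0.0.0/8"),  # admin SSH, internal only
]

def provision(app_name: str, template: list) -> list:
    """Render a rule template into concrete records for one deployment.

    Here we just return dicts; a real controller integration would POST
    these to the network API instead.
    """
    return [
        {"app": app_name, "direction": r.direction,
         "protocol": r.protocol, "port": r.port, "cidr": r.cidr}
        for r in template
    ]

rules = provision("storefront", WEB_APP_TEMPLATE)
print(len(rules))        # 2
print(rules[0]["port"])  # 443
```

Because the template travels with the application rather than living in a device configuration, redeploying the app in a new environment is just another call to the provisioning step, which is the point of the software-defined model.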
Where is the center of IT automation today? The network is the nervous system, the workloads are the brain, and storage, compute, and memory are the other internal organs. The heart of our IT machine is often a management tool such as VMware vCenter, System Center, or something else entirely. But should our IT automation live within the heart, when the heart only controls virtual and some physical components, or should IT automation be tied to the nervous system that crosses boundaries?
Moving to the cloud! Let me be a little more precise and say moving to the public cloud. This concept has been embraced and thrives in the consumer market, but will it really take off in the corporate world, and should it? One of the main ideas of virtualization, in the beginning, was the ability to consolidate physical systems into a virtual environment: shrinking the overall footprint, taking full advantage of all the available compute resources in a physical server, and gaining centralized control of compute, storage, and networking resources.
When enterprises consider putting business-critical workloads in public clouds, many of them overlook, at least in part, the critical issues of economics and of how much of the cloud software stack the vendor fully controls. As a result, relatively inexpensive offerings where the vendor controls its entire software stack (like Amazon EC2) are sometimes improperly compared to offerings from vendors like Savvis or Terremark, who build their clouds out of VMware-provided or OpenStack components. This distinction drives both the cost of the respective offerings and the degree to which the cloud vendor can enhance its offerings with rapid agility and quickly address service-level issues.