Data Center Virtualization

Data Center Virtualization covers virtualizing servers, networks, and storage, delivering server consolidation, CAPEX savings, IT agility, and improved management. Major areas of focus include the tradeoffs between the major virtualization platforms (VMware vSphere, Microsoft Hyper-V, and Red Hat KVM), the evolution of hypervisors into data center management platforms, VMware’s Software Defined Data Center strategy, and how the SDDC is spurring innovation in storage, networking, and server hardware. Covered vendors include VMware, Microsoft, Red Hat, CloudPhysics, Hotlink, Tintri, and VMTurbo.

Software Defined Data Center Cloud Management

The entire purpose of constructing a Software Defined Data Center is to allow new data center services to be provisioned rapidly in response to business demands. But the business does not just want a data center service. The business wants and needs either a full development environment in support of custom application deployment, or a full business application delivered as a service. Cloud Management is the crucial layer of software that adds application-level services to SDDC services to create solutions for the business. Continue reading Software Defined Data Center Cloud Management

My Thoughts on the VMware vCloud Hybrid Service

Last week, VMware announced the new vCloud Hybrid Service, which will bring a VMware public cloud service to the masses later this year. There are a couple of related posts from our own Virtualization Practice analysts, which can be found here and here. Since there has been plenty of conversation about just what the vCloud Hybrid Service is, I am going to use this post to share my thoughts on the service itself.

Continue reading My Thoughts on the VMware vCloud Hybrid Service

VMware vCloud Hybrid Service

On Tuesday, VMware announced its answer to the public cloud: the vCloud Hybrid Service (vCHS). One of the biggest hurdles for the roughly 500,000 VMware customers has been that their on-premises, private infrastructure isn’t directly interoperable with any sizable public cloud, like Amazon AWS or Rackspace. If you want to move toward a public or hybrid cloud model, you need to add additional software, like Enstratius’ offerings or VMware’s own vCloud Automation Center. You could also use the vCloud Connector, but that relies on having another vCloud available. One of VMware’s frustrations has been the adoption rate among partners, most of whom have declined to build full vCloud implementations, effectively trapping VMware customers inside their own data centers. Continue reading VMware vCloud Hybrid Service

Software Defined Data Center Analytics

One of the main points of the SDDC is moving the configuration of the environment out of the hardware that supports it and into a layer of software that can collectively manage all of the storage, networking, compute, and memory resources of the environment. Once all of the configuration of the data center, and some of the execution of the work, is moved into software, SDDC Data Center Analytics will play a critical role in keeping your SDDC up and running with acceptable performance. Continue reading Software Defined Data Center Analytics

EMC ViPR as a Part of a SDDC

At EMC World 2013, EMC announced ViPR as its answer to storage within the software defined data center. ViPR presents multiple types of storage while separating the control plane from the data plane. In addition, ViPR is a head end: it fronts traditional storage arrays as an automation and control point, and rather than replacing any array, it may make those arrays easier to use as we move to the software defined data center. Yet ViPR also raises several questions about how storage will be accessed by the software defined data center: is ViPR the future, or is there more to happen? Continue reading EMC ViPR as a Part of a SDDC
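The control-plane/data-plane split described above can be sketched in a few lines. This is a toy illustration of the general pattern, not EMC's API: all class and method names here are invented for the example. The head end automates provisioning across heterogeneous arrays, but once a volume exists, I/O goes straight to the array, bypassing the head end.

```python
class Array:
    """Toy stand-in for a traditional storage array. Illustrative only."""
    def __init__(self, name):
        self.name = name
        self.volumes = {}

    # Data plane: reads and writes go directly to the array.
    def write(self, vol, block, data):
        self.volumes[vol][block] = data

    def read(self, vol, block):
        return self.volumes[vol].get(block)


class ControlPlane:
    """Head end fronting existing arrays: it automates provisioning
    but never sits in the I/O path."""
    def __init__(self, arrays):
        self.arrays = list(arrays)
        self._next = 0

    def provision(self, vol_name):
        # Trivial round-robin placement; real placement policy would
        # consider capacity, tier, and service level.
        array = self.arrays[self._next % len(self.arrays)]
        self._next += 1
        array.volumes[vol_name] = {}
        return array  # the caller now talks to the array directly


arrays = [Array("array-a"), Array("array-b")]
ctl = ControlPlane(arrays)
target = ctl.provision("vm-datastore-01")
target.write("vm-datastore-01", 0, b"hello")  # data path bypasses the head end
print(target.name, target.read("vm-datastore-01", 0))
```

The design point the excerpt makes is visible here: the existing arrays are untouched and still serve all I/O; only provisioning and control move up into software.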

Tintri OS 2.0 & ReplicateVM

As virtualization slowly takes over almost everything in information technology, certain things need to change. One of those things is the way storage operates. Traditional enterprise storage was built for a time when physical machines were king, and there was only one operating system, and often only one workload, per physical server. Virtualization changes that, putting multiple workloads and multiple OS images on a single host, often causing predictive caching algorithms to fail because the I/O from a particular server looks almost completely random (sometimes referred to as the “I/O blender”). In fact, the I/O isn’t random; it’s just the result of multiple VMs each doing their own thing. Most monolithic storage vendors have adapted their arrays to better understand this new type of I/O, at least in part. However, there is a whole new class of storage company that is looking to start over, upending the storage market by pairing commodity hardware with deeper understandings of virtual environments and new management models. Continue reading Tintri OS 2.0 & ReplicateVM
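The “I/O blender” effect is easy to demonstrate. In the sketch below (a toy model, not how any real hypervisor schedules I/O), each VM issues perfectly sequential reads against its own region of the datastore, but the hypervisor interleaves their requests onto one queue, so the stream the array sees is mostly non-sequential and defeats sequential-prefetch heuristics.

```python
import random


def vm_io_stream(start_block, length):
    """One VM reading its virtual disk sequentially: n, n+1, n+2, ..."""
    return [start_block + i for i in range(length)]


def blend(streams, seed=42):
    """Toy model of the hypervisor merging per-VM queues onto one device queue."""
    rng = random.Random(seed)
    queues = [list(s) for s in streams]
    merged = []
    while any(queues):
        queue = rng.choice([q for q in queues if q])
        merged.append(queue.pop(0))
    return merged


def sequential_fraction(stream):
    """Fraction of requests landing exactly one block after the previous request."""
    hits = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return hits / (len(stream) - 1)


# Three VMs, each perfectly sequential within its own address range.
vms = [vm_io_stream(base, 200) for base in (0, 100_000, 200_000)]
blended = blend(vms)

for i, stream in enumerate(vms):
    print(f"VM {i} alone: {sequential_fraction(stream):.0%} sequential")
print(f"Blended at the array: {sequential_fraction(blended):.0%} sequential")
```

Each VM's own stream is 100% sequential, yet the merged stream the array observes is only about a third sequential, which is exactly why per-VM-aware storage (the pitch Tintri and its peers make) can cache and prefetch where a whole-LUN view cannot.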