In Do Users Have a Negative Perception of Desktop Virtualization?, James Rankin brought up a set of issues that arise whenever a new platform is deployed in an organization. Those issues revolve around the fact that users tend to blame all user experience problems on the new platform, even problems that existed prior to its deployment. In the case of a Citrix or VMware VDI deployment, this takes the form of “Citrix is slow” or “View is slow.” Continue reading Addressing Users’ VDI Performance Concerns
We all understand what it means to virtualize CPU and memory (compute). This is what VMware vSphere and Microsoft Hyper-V have been doing for years. We are starting to get our arms around what it means to virtualize networking and storage, as VMware progresses down its path to virtualize all of the key resources in the data center as a part of its software-defined data center strategy. Now, along comes Intigua with an offering that virtualizes the management stack in your virtualized data center. Continue reading News: Intigua Virtualizes the Management Layer
VMTurbo has announced a new version of its VMTurbo Operations Manager that extends its ability to automatically ensure workload resource allocation by taking control actions at both the physical storage layer and the converged fabric layer.
VMTurbo is a unique vendor in the Operations Management space in that it allows you to specify the priorities of your workloads; VMTurbo then automatically tells you what actions to take to ensure that the highest-priority workloads get the resources (and therefore the performance) that they need. If you turn on the automation (which most customers do), VMTurbo will even execute these recommended actions for you. For example, VMTurbo might change the amount of virtual memory or the number of virtual CPUs allocated to a workload, or it might use VMware Storage I/O Control to ensure that a particular workload gets the storage bandwidth it needs. Historically, however, the actions that VMTurbo has been able to take have been constrained by whatever control APIs were available in the virtualization platform upon which VMTurbo was running.
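The priority-driven control loop described above can be illustrated with a minimal sketch. This is not VMTurbo's actual algorithm or API; the `Workload` class, field names, and the 90% saturation threshold are all hypothetical, chosen only to show the idea of granting scarce resources to the highest-priority saturated workloads first.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    priority: int      # hypothetical: lower number = higher priority
    vcpus: int
    cpu_demand: float  # observed utilization of allocated vCPUs, 0.0-1.0

def recommend_actions(workloads, spare_vcpus):
    """Grant spare vCPUs to the highest-priority saturated workloads first.

    A toy illustration of priority-based resource assurance: walk the
    workloads in priority order and recommend an allocation change for
    each one whose demand exceeds a saturation threshold.
    """
    actions = []
    for w in sorted(workloads, key=lambda w: w.priority):
        if w.cpu_demand > 0.9 and spare_vcpus > 0:
            actions.append(f"add 1 vCPU to {w.name}")
            spare_vcpus -= 1
    return actions
```

With one spare vCPU and two saturated workloads, only the higher-priority one gets the recommendation, which is the essence of the priority model: scarce capacity flows to the workloads you have declared most important.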
Today’s VMTurbo Announcement
Today VMTurbo has announced two new modules that extend its automated control actions into the physical hardware:
- The VMTurbo Storage Control Module – VMTurbo’s Storage Control Module ensures applications get the storage performance they require to operate reliably while enabling efficient use of storage infrastructure, thus preventing unnecessary over-provisioning. This module helps users solve their pressing storage performance and cost challenges, maximize their existing storage investments, and embrace the adoption of advanced features and packaging such as NetApp Clustered Data ONTAP (cluster mode) and FlexPod. For more detailed information on the VMTurbo Storage Control Module, visit www.vmturbo.com/storage-resource-management.
- The VMTurbo Fabric Control Module – Modern compute platforms and blade servers have morphed into fabrics unifying compute, network, virtualization, and storage access in a single integrated architecture. Furthermore, fabrics like Cisco (CSCO) UCS form the foundation of a programmable infrastructure for today’s private clouds and virtualized data centers, and the backbone of converged infrastructure offerings such as VCE vBlock and NetApp FlexPod. With the addition of this Fabric Control Module, VMTurbo’s software-driven control system ensures workloads get the compute and network resources they need to perform reliably while maximizing the utilization of underlying blades and ports. For more detailed information on the VMTurbo Fabric Control Module, visit www.vmturbo.com/ucs-management.
The complete VMTurbo announcement is available here.
Strategic Implications of this VMTurbo Announcement
In “VMware Rejoins the Automated Service Assurance Debate“, we discussed the two known approaches to automated service assurance. One approach is to collect monitoring metrics, interpret them with an analytics engine, find the anomalies, and then take action based upon the anomalies in the metrics. We pointed out the challenges in making the leap from an anomalous metric to the correct action, as most metrics do not carry with them the context that allows that automated action to occur. For example, if you only know that a spindle on an array is being over-taxed, resulting in high latency, you cannot automatically fix that problem unless you know which workloads are causing the contention.
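The limitation described above can be made concrete with a toy anomaly detector. This is a generic z-score sketch, not any vendor's analytics engine; note that its output is just an index into a series of latency samples. It can tell you *that* latency spiked, but nothing in the metric says *which* workload caused the spike, which is exactly the missing context the article points to.

```python
import statistics

def find_anomalies(samples, threshold=2.0):
    """Flag sample indices more than `threshold` population standard
    deviations from the mean.

    A deliberately simple stand-in for an analytics engine: it finds
    anomalous measurements but carries no information about their cause.
    """
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # a flat series has no outliers
    return [i for i, x in enumerate(samples)
            if abs(x - mean) / stdev > threshold]
```

Given latency samples like `[5, 5, 6, 5, 5, 5, 50]`, the detector flags the spike, but mapping that anomaly back to the offending workload still requires context the metric does not contain.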
VMTurbo embodies the opposite approach, which can best be characterized as preventive care, like good dental hygiene. By ensuring that each important workload gets the resources it needs, the contention never occurs in the first place, making the entire process of walking backwards up the root cause chain unnecessary. These new capabilities extend that approach to ensure that not only virtual resources are allocated correctly, but physical ones as well (especially really expensive ones, like enterprise-class storage and UCS capacity).
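The preventive approach can be sketched as a proportional-share allocation, similar in spirit to how VMware Storage I/O Control divides storage bandwidth by shares. The function below is a hypothetical illustration, not VMTurbo's or VMware's implementation; the share values are invented. The point is that when each workload's entitlement is decided up front, contention has no chance to arise, so no root-cause analysis is needed afterwards.

```python
def allocate_iops(total_iops, shares):
    """Divide the available IOPS among workloads in proportion to their
    share values, so that high-priority workloads are guaranteed their
    slice of bandwidth before any contention can occur."""
    total_shares = sum(shares.values())
    return {name: total_iops * s / total_shares
            for name, s in shares.items()}
```

For example, with 1,000 IOPS and shares of 600/300/100 for a database, a web tier, and a batch job, the database is entitled to 600 IOPS regardless of what the batch job attempts to do.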
There is one more potentially very interesting long-term impact to what VMTurbo is doing here. If VMTurbo is successful in automating resource allocation up and down the stack (expanding in both directions over time), then it could establish itself as a crucial layer of automation that is independent of hypervisors. It would be logical for VMTurbo to extend its automation into other storage arrays, other converged infrastructures, and then up into questions of application response time and throughput. None of the hypervisor vendors show any intent of doing things like this, which gives VMTurbo a clean runway to establish itself as such a layer. If this happens, VMTurbo will be the first, but not the last, vendor to establish a layer of automation independent of the hypervisor, and that will change our industry in very profound ways.
VMTurbo has extended its ability to ensure that important workloads get the right resources to include automatic, software-based control of NetApp storage resources and Cisco UCS resources. This is a breakthrough in automated control systems for highly dynamic environments, and it may well become an essential capability for the management of the forthcoming software-defined data center.
In Understanding the Value of Unique Management Data, we explored the difference between unique data and commodity data as it pertains to the value of a monitoring solution. In Real-Time Monitoring: Almost Always a Lie, we explored the difference between real-time data collection and near real-time processing of non real-time data. In this post, we take a step back and explore what data we need and how we need to collect it to manage the software-defined data center (SDDC) and the cloud. Continue reading SDDC and Cloud Management Data Collection
In Understanding the Value of Unique Management Data, we pointed out that tools that collect unique data about the performance of infrastructure and applications are more likely to be able to provide you the value you want than tools that just rely on commodity data. In this post, we expose the most frequent marketing lie in the management software industry. Continue reading Real-Time Monitoring: Almost Always a Lie
Hundreds of companies and products monitor and manage various elements of your data center and your clouds. But most of these products rely on commonly available management data that is accessed via either industry-standard APIs or management APIs provided by various vendors. A few products do the extra work to collect unique data, and these products will be the focus of this article. Continue reading Understanding the Value of Unique Management Data