My last article reported on the virtualization vendors’ fourth-quarter results; this post reports on news from the storage industry. My sources are Cleveland Research Company (CRC) reports, individual company reports, and FactSet Estimates.
A week later than some people predicted, the news has broken: NetApp has bought SolidFire for $870M. This continues the trend of established storage companies acquiring start-ups with strong flash products rather than building their own. SolidFire initially targeted the service provider market with its scale-out all-flash array; in the last year or so, it has taken aim at enterprise data centers as well. In both of these markets, NetApp is well established as a vendor of disk-based storage arrays. However, it has been losing market share, fundamentally because it lacks a modern solid-state product. The FlashRay product that NetApp developed has failed, leaving NetApp no option but to acquire.
Veeam is forging a series of interesting agreements with competitors as well as infrastructure players. It has also added features into its core product that are considered legacy rather than forward-looking, such as tape support. In essence, it is becoming the center of the data protection space within any organization. Veeam Availability Suite augments existing sets of tools to let them do more than they could alone. Veeam has founded its own ecosystem.
VMTurbo has announced a new version of its VMTurbo Operations Manager that extends its ability to automatically ensure workload resource allocation by taking control actions at both the physical storage layer and the converged fabric layer.
VMTurbo is unique among vendors in the Operations Management space in that it allows you to specify the priorities of your workloads, and then VMTurbo automatically tells you what actions to take to ensure that the highest-priority workloads get the resources (and therefore the performance) that they need. If you turn on the automation (which most customers do), VMTurbo will even execute these recommended actions for you. For example, VMTurbo might change the amount of virtual memory or the number of virtual CPUs allocated to a workload, or it might use VMware Storage I/O Control to ensure that a particular workload gets the storage bandwidth that it needs. Historically, however, the actions that VMTurbo has been able to take have been constrained by whatever control APIs were available in the virtualization platform upon which VMTurbo was running.
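The priority-driven approach described above can be illustrated with a minimal sketch. This is not VMTurbo's actual algorithm or API; the workload names, numbers, and the memory-only resource model are all hypothetical, chosen only to show how priorities translate into concrete resize actions.

```python
# Illustrative sketch of priority-driven resource control, in the spirit of
# the approach described above. This is NOT VMTurbo's actual algorithm or
# API; all names and figures here are hypothetical.

def recommend_actions(workloads, capacity_mb):
    """Allocate memory to workloads in priority order and recommend
    resizing actions for any workload not at its granted allocation."""
    actions = []
    remaining = capacity_mb
    # Highest priority first (lower number = higher priority).
    for w in sorted(workloads, key=lambda w: w["priority"]):
        grant = min(w["desired_mb"], remaining)
        remaining -= grant
        if grant != w["current_mb"]:
            verb = "increase" if grant > w["current_mb"] else "decrease"
            actions.append(f"{verb} memory of {w['name']} from "
                           f"{w['current_mb']} MB to {grant} MB")
    return actions

workloads = [
    {"name": "batch-job", "priority": 2, "current_mb": 4096, "desired_mb": 4096},
    {"name": "oltp-db",   "priority": 1, "current_mb": 2048, "desired_mb": 8192},
]
for action in recommend_actions(workloads, capacity_mb=10240):
    print(action)
```

Note that the high-priority database is grown first, and the lower-priority batch job is shrunk to fit the remaining capacity; in a real control system, executing such actions would go through the platform's control APIs rather than a print statement.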
Today’s VMTurbo Announcement
Today VMTurbo has announced two new modules that extend its automated control actions into the physical hardware:
- The VMTurbo Storage Control Module – VMTurbo’s Storage Control Module ensures applications get the storage performance they require to operate reliably while enabling efficient use of storage infrastructure – thus preventing unnecessary overprovisioning. This module helps users solve their pressing storage performance and cost challenges, maximize their existing storage investments, and embrace the adoption of advanced features and packaging such as NetApp Clustered Data ONTAP (cluster mode) and FlexPod. For more detailed information on the VMTurbo Storage Control Module, visit www.vmturbo.com/storage-resource-management.
- The VMTurbo Fabric Control Module – Modern compute platforms and blade servers have morphed into fabrics unifying compute, network, virtualization, and storage access in a single integrated architecture. Fabrics like Cisco (CSCO) UCS form the foundation of a programmable infrastructure for today’s private clouds and virtualized data centers, and the backbone of converged infrastructure offerings such as VCE Vblock and NetApp FlexPod. With the addition of this Fabric Control Module, VMTurbo’s software-driven control system ensures workloads get the compute and network resources they need to perform reliably while maximizing the utilization of the underlying blades and ports. For more detailed information on the VMTurbo Fabric Control Module, visit www.vmturbo.com/ucs-management.
The complete VMTurbo announcement is available here.
Strategic Implications of this VMTurbo Announcement
In “VMware Rejoins the Automated Service Assurance Debate“, we discussed the two known approaches to automated service assurance. One approach is to collect monitoring metrics, interpret them with an analytics engine, find the anomalies, and then take action based upon the anomalies in the metrics. We pointed out the challenges in making the leap from an anomalous metric to the correct action, as most metrics do not carry with them the context that allows that automated action to occur. For example, if you only know that a spindle on an array is being over-taxed and is producing high latency, you cannot automatically fix that problem unless you also know which workloads are causing the contention.
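The context problem described above can be made concrete with a small sketch: a raw latency anomaly by itself does not identify a corrective action, but per-workload I/O attribution does. The workload names, IOPS figures, and threshold below are invented for the example.

```python
# Hypothetical illustration of the context problem: an anomalous latency
# reading alone tells you *that* there is contention, not *which* workload
# to throttle or migrate. Per-workload attribution supplies that context.

LATENCY_THRESHOLD_MS = 30  # invented anomaly threshold

def pick_throttle_candidate(datastore_latency_ms, workload_iops):
    """If the datastore shows anomalous latency, return the workload
    contributing the most IOPS -- the context needed to act automatically."""
    if datastore_latency_ms <= LATENCY_THRESHOLD_MS:
        return None  # no anomaly, no action required
    # With only the latency metric, automation would stop here; the
    # per-workload breakdown is what makes an automated action possible.
    return max(workload_iops, key=workload_iops.get)

candidate = pick_throttle_candidate(
    datastore_latency_ms=45,
    workload_iops={"web-vm": 300, "analytics-vm": 2200, "backup-vm": 800},
)
print(candidate)  # the dominant contributor
```

The point of the sketch is the second argument: without the per-workload attribution, the anomaly-driven approach stalls at "latency is high" and a human must walk the root-cause chain manually.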
VMTurbo embodies the opposite approach, which is best characterized as preventive, like good dental hygiene. By ensuring that each important workload gets the resources it needs, the contention never occurs in the first place, making the entire process of walking backwards up the root-cause chain unnecessary. These new capabilities ensure that not only virtual resources but also physical ones (especially expensive ones like enterprise-class storage and UCS capacity) are allocated correctly.
There is one more potentially very interesting long-term implication of what VMTurbo is doing here. If VMTurbo succeeds in automating resource allocation up and down the stack (expanding in both directions over time), it could establish itself as a crucial layer of automation that is independent of hypervisors. It would be logical for VMTurbo to extend its automation into other storage arrays, other converged infrastructures, and then up into questions of application response time and throughput. None of the hypervisor vendors shows any intent of doing this, which gives VMTurbo a clear runway to establish itself as exactly such a layer. If this happens, VMTurbo will be the first, but not the last, vendor to establish a layer of automation independent of the hypervisor, and that will change our industry in profound ways.
VMTurbo has extended its ability to ensure that important workloads get the right resources to include automatic software-based control of NetApp storage resources and Cisco UCS resources. This is a breakthrough in automated control systems for highly dynamic environments, and it may well become an essential capability for managing the forthcoming Software-Defined Data Center.
By Greg Schulz, Server and StorageIO @storageio
Keeping in mind that the best server and storage I/O is the one you do not have to do, the second best is the I/O with the least impact combined with the greatest benefit to an application. This is where SSD, including DRAM- and NAND-flash-based solutions, comes into the conversation for storage performance optimization.
The question is not if, but rather when, where, what, and how much SSD (NAND flash or DRAM) you will have in your environment, either to replace or to complement HDDs.