Tag Archives: VMTurbo

VMworld US 2015: Day 4 Recap

Welcome to The Virtualization Practice’s week-long coverage of VMworld US 2015. Tune in all week for our daily recap of the major announcements and highlights from the world’s premier virtualization and cloud conference.

With all the forward-looking business out of the way (see the Day 1, Day 2, and Day 3 recaps), VMworld took a breath yesterday and focused on other parts of the ecosystem. The first annual Developer Day was held as part of the VMworld DevOps program track and included a Hackathon where coders and non-coders alike could compete for prizes. Non-coders had a series of increasingly difficult challenges to complete. Coders worked to create the most useful, creative, and complex tools & services on vCloud Air, judged at the end of the day, with prizes like a guitar signed by Alabama Shakes and Neon Trees, the VMworld Party bands.

The VMworld DevOps program track was new this year, and a welcome addition for folks who want to know more about DevOps but cannot justify a separate trip to a conference like PuppetConf. There were sessions every day, all day, by heavy hitters in the industry. Kit Colbert, VMware CTO of Cloud-Native Apps, kicked it off on Monday, followed by presentations from Andrew Shafer (co-founder of Puppet), Jez Humble (VP of Chef), and Steve Herrod (former CTO of VMware), all focused on the larger topics of DevOps. Between those were in-the-trenches presentations from folks like Fabio Rapposelli and Massimo Re Ferre on using tools and techniques to get things done. The DevOps track stayed fairly agnostic, offering sessions on Puppet, Chef, and Ansible (all competing configuration management technologies), as well as sessions on the VMware Integrated OpenStack modules, using Jenkins for continuous delivery, and VMware-specific tools like vRealize Code Stream.

These are the sorts of technologies that are changing the landscape of IT. While CEOs and COOs and CTOs are on stage speaking about the big trends in IT, it’s the continuous baby steps of the people in the industry that actually make those trends happen. The right person deciding to attend a DevOps session, because there’s no barrier to entry once you’re in the conference, can change the future of an organization. Similarly, a number of other smaller announcements from VMware are pretty important but didn’t get a lot of attention:

VMware vRealize Operations 6.1 (vROps) gained the ability to suggest workload placements based on business and technical rules. This doesn’t seem very interesting at first glance, but it addresses a large problem for environments with a variety of vSphere clusters. It’s also a problem that competitors like VMTurbo have already solved. Furthermore, vROps will now be able to natively monitor operating systems and applications. Previously, that was possible only through VMware Hyperic, a hard-to-use product with terrible adoption rates (for good reason) that is all but dead now.

VMware Integrated OpenStack 2 adds a number of OpenStack components, like load balancing, Ceilometer, and Heat Auto Scaling. That’s not what’s interesting, though. The most interesting part of the announcement is the phrase “industry first seamless upgrade capability.” OpenStack suffers from a few particular problems that keep customers away, and hellish or impossible upgrades are one of them. Other vendors, like Piston Cloud, have already solved the upgrade problem, so if VMware wanted to be serious about OpenStack, they’d need to solve it, too. And it looks like they’re serious, which is good.

VMware vSphere APIs for I/O Filtering sounds pretty boring, but it’s a diamond buried under the mountain of other, seemingly sexier announcements. VMware worked with their partner SanDisk to create an I/O filtering layer similar to that of Microsoft’s Minifilter APIs in Windows. These new APIs allow software to hook directly into the I/O path of a VM, meaning that third-party software can intercept & work with I/O directly. For SanDisk, this opens the door for their FlashSoft caching products to integrate very closely and very efficiently with vSphere (and look, they’ve announced just that!). For others, this means DR replication might now be free of the hated & awful VM snapshot, or the quirky and unreliable changed block tracking, both vestiges of a pre-vSphere era. With the filtering APIs a replication product can just insert itself in the I/O stream and copy all I/O as it is happening, without having to figure out what changed at a later date. Look for synchronous mirroring to appear natively in VMware vSphere Replication as a result.
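The filter-driver idea is easier to see in code. The sketch below is a hypothetical illustration in Python, not the actual vSphere APIs for I/O Filtering (which are VMware-specific and C-based): every guest write passes through to primary storage and is mirrored to a replica in-band, so nothing has to be reconstructed later from snapshots or changed-block scans.

```python
# Conceptual sketch only: a replication filter sitting in a VM's I/O
# path. The class name, stores, and methods are all hypothetical.

class ReplicatingIOFilter:
    """Forwards each write to primary storage and mirrors it as it happens."""

    def __init__(self, backing_store, replica_store):
        self.backing = backing_store    # stand-in for the primary disk
        self.replica = replica_store    # stand-in for the remote copy

    def write(self, offset, data):
        self.backing[offset] = data     # pass the write through first
        self.replica[offset] = data     # mirror the same write in-band:
                                        # no snapshot or CBT pass needed later

    def read(self, offset):
        return self.backing[offset]     # reads go straight to primary storage


primary, replica = {}, {}
disk = ReplicatingIOFilter(primary, replica)
disk.write(0, b"boot sector")
disk.write(4096, b"app data")
assert replica == primary               # replica has tracked every write
```

Because the mirror happens at write time, synchronous replication falls out of the design for free, which is exactly why these APIs matter for DR products.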

VMware Site Recovery Manager 6.1 now integrates with NSX 6.2, which is sort of a no-brainer, but it also gained the ability to work with vSphere’s Storage Policy-based Management to allow workloads to be automatically protected depending on where they’re placed. That’s also a no-brainer, but it’s huge, and it hasn’t been done by VMware before.

Much of this is evidence that, under Pat Gelsinger, VMware is once again interested in following through on their commitments, creating quality products that integrate well with each other and add visible & tangible value to IT. Industry pundits trapped in the groupthink of Silicon Valley keep saying that VMware is dying, but this sort of thing is how VMware will keep customers in the fold. Making it easy for an organization to run their workloads well and seamlessly transition (or not) to the public cloud where appropriate is huge in the eyes of both CIOs and in-the-trenches IT staff.

Last, of course, was the VMworld Party! All big conferences have parties and bands, and the Neon Trees and Alabama Shakes did a great job of entertaining the crowds at AT&T Park.

Join us again tomorrow for a wrap up of the whole VMworld US 2015 conference, including highlights and key takeaways.

VMworld US 2015: Day 3 Recap


VMworld US 2015 continued yesterday, kicked off by the general session. End-User Computing’s Sanjay Poonen led the keynote, in which VMware fleshed out what it means by “any application and any device” within the “Ready for Any” theme of the conference. Beginning with the VMware Workspace Suite, VMware talked at length about the growth of mobile computing and how AirWatch, together with VMware App Volumes, enables IT to manage all Windows 10 devices (physical and virtual, mobile or not), as well as iOS and Android devices, from a single pane of glass. Foreshadowing the next speaker, Poonen wrapped up his portion by talking about the synergies between AirWatch, Horizon, and NSX, with policy settings in NSX affecting and being affected by AirWatch connectivity and data access.


Scale-Out Is a Benefit to HyperConverged

I recently upgraded my nodes from 96 GB of memory to 256 GB of memory, and someone on Twitter stated the following:

@Texiwill thought the trend today is scale out not scale up? #cloud

The implication was that you never upgrade your hardware: you buy new or you enter the cloud. Granted, both options are beneficial. However, buying new and adding to your environment may not be necessary, and you have most likely already entered the cloud through SaaS applications and perhaps some IaaS. The question remains: upgrade, enhance existing hardware, or buy net new somewhere? When should you do any of these? Or should you at all?

Addressing Users’ VDI Performance Concerns

In Do Users Have a Negative Perception of Desktop Virtualization?, James Rankin brought up a set of issues that arise whenever a new platform is deployed in an organization. Those issues revolve around the fact that users then tend to blame all user-experience problems on the new platform, even if those problems existed prior to its deployment. In the case of a Citrix or VMware VDI deployment, this takes the form of “Citrix is slow” or “View is slow.”

News: VMTurbo Extends Software Driven Control into the Storage and UCS Fabric Layers

VMTurbo has announced a new version of its VMTurbo Operations Manager that extends its ability to automatically ensure workloads get the resources they need by taking control actions at both the physical storage layer and the converged fabric layer.

VMTurbo Background

VMTurbo is a unique vendor in the operations management space in that it allows you to specify the priorities of your workloads and then automatically tells you what actions to take to ensure that the highest-priority workloads get the resources (and therefore the performance) they need. If you turn on the automation (which most customers do), VMTurbo will even execute these recommended actions for you. For example, VMTurbo might change the amount of virtual memory or the number of virtual CPUs allocated to a workload, or it might use VMware Storage I/O Control to ensure that a particular workload gets the storage bandwidth it needs. Historically, however, the actions VMTurbo has been able to take have been constrained by whatever control APIs were available in the virtualization platform upon which it was running.
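As a rough illustration of the idea (not VMTurbo’s actual algorithm or API, and with made-up workload data), a priority-driven recommender can be sketched in a few lines: walk the workloads from highest priority down, compare observed demand to current allocation, and emit concrete resize actions.

```python
# Hypothetical sketch of priority-driven resource recommendations.
# Field names and sizing logic are illustrative, not VMTurbo's.

def recommend_actions(workloads):
    """Yield resize actions for workloads whose demand outstrips their
    allocation, highest priority first."""
    actions = []
    for w in sorted(workloads, key=lambda w: w["priority"], reverse=True):
        if w["cpu_demand"] > w["vcpus"]:
            actions.append(f"add {w['cpu_demand'] - w['vcpus']} vCPU(s) to {w['name']}")
        if w["mem_demand_gb"] > w["mem_gb"]:
            actions.append(f"grow {w['name']} memory to {w['mem_demand_gb']} GB")
    return actions

workloads = [
    {"name": "batch",   "priority": 1, "vcpus": 4, "cpu_demand": 4,
     "mem_gb": 8,  "mem_demand_gb": 8},
    {"name": "oltp-db", "priority": 9, "vcpus": 4, "cpu_demand": 6,
     "mem_gb": 16, "mem_demand_gb": 24},
]
for action in recommend_actions(workloads):
    print(action)
# The high-priority database is resized; the batch job, already
# satisfied, generates no action.
```

With automation turned on, the same list would simply be executed instead of printed, which is the mode most customers run in.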

Today’s VMTurbo Announcement

Today, VMTurbo announced two new modules that extend its automated control actions into the physical hardware:

  • The VMTurbo Storage Control Module – VMTurbo’s Storage Control Module ensures applications get the storage performance they require to operate reliably while enabling efficient use of storage infrastructure, thus preventing unnecessary overprovisioning. This module helps users solve their pressing storage performance and cost challenges, maximize their existing storage investments, and embrace the adoption of advanced features and packaging such as NetApp Clustered Data ONTAP (cluster mode) and FlexPod. For more detailed information on the VMTurbo Storage Control Module, visit www.vmturbo.com/storage-resource-management.
  • The VMTurbo Fabric Control Module – Modern compute platforms and blade servers have morphed into fabrics unifying compute, network, virtualization, and storage access in a single integrated architecture. Furthermore, fabrics like Cisco UCS form the foundation of a programmable infrastructure for today’s private clouds and virtualized data centers, the backbone of converged infrastructure offerings such as VCE Vblock and NetApp FlexPod. With the addition of this Fabric Control Module, VMTurbo’s software-driven control system ensures workloads get the compute and network resources they need to perform reliably while maximizing the utilization of the underlying blades and ports. For more detailed information on the VMTurbo Fabric Control Module, visit www.vmturbo.com/ucs-management.

The complete VMTurbo announcement is available here.

Strategic Implications of this VMTurbo Announcement

In “VMware Rejoins the Automated Service Assurance Debate”, we discussed the two known approaches to automated service assurance. One approach is to collect monitoring metrics, interpret them with an analytics engine, find the anomalies, and then take action based upon the anomalies in the metrics. We pointed out the challenge in making the leap from an anomalous metric to the correct action, as most metrics do not carry with them the context that allows that automated action to occur. For example, if you only know that a spindle on an array is being overtaxed, resulting in high latency, you cannot automatically fix that problem unless you know which workloads are causing the contention.
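A toy example makes that limitation concrete (hypothetical data, not any vendor’s analytics engine): a simple z-score test can reliably flag a latency anomaly, but the metric alone says nothing about which workload to act on.

```python
# Illustrative sketch of the metrics-and-analytics approach.
# The data and threshold are made up for demonstration.

import statistics

def is_anomalous(samples, current, threshold=3.0):
    """Flag `current` if it sits more than `threshold` standard
    deviations above the historical mean (a simple z-score test)."""
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return (current - mean) / stdev > threshold

history_ms = [5, 6, 5, 7, 6, 5, 6, 7, 5, 6]   # normal datastore latency (ms)
assert is_anomalous(history_ms, 40)            # the spike is detected...
# ...but nothing here identifies the noisy-neighbor VM causing it;
# mapping the anomaly to a corrective action requires extra context
# that the raw metric does not carry.
```

That missing mapping from anomaly to action is exactly the gap the opposite, prevention-first approach avoids.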

VMTurbo embodies the opposite approach, which can best be characterized as good dental hygiene. By ensuring that each important workload gets the resources it needs, the contention never occurs in the first place, making the entire process of walking backwards up the root-cause chain unnecessary. These new capabilities extend that assurance so that not only virtual resources but also physical ones (especially really expensive ones like enterprise-class storage and UCS capacity) are allocated correctly.

There is one more potentially very interesting long-term impact to what VMTurbo is doing here. If VMTurbo is successful in automating resource allocation up and down the stack (expanding in both directions over time), it could establish itself as a crucial layer of automation that is independent of hypervisors. It would be logical for VMTurbo to extend its automation into other storage arrays and other converged infrastructures, and then up into questions of application response time and throughput. None of the hypervisor vendors shows any intent of doing things like this, which gives VMTurbo a clean runway to establish itself as such a layer. If that happens, VMTurbo will be the first, but not the last, vendor to establish a layer of automation independent of the hypervisor, and that will change our industry in very profound ways.


VMTurbo has extended its ability to ensure that important workloads get the right resources to include automatic, software-based control of NetApp storage resources and Cisco UCS resources. This is a breakthrough in automated control systems for highly dynamic environments, and it may well become an essential capability for the management of the forthcoming Software Defined Data Center.

Software Defined Data Center Analytics

Moving the configuration of the environment from the hardware that supports it into a layer of software that can collectively manage all of the storage, networking, compute, and memory resources of the environment is one of the main points of the SDDC. Once all of the data center’s configuration, and some of the execution of the work, is moved into software, SDDC analytics will play a critical role in keeping your SDDC up and running with acceptable performance.