The Virtualization Practice was recently offline for two days; we thank you for coming back to us after this failure. The cause was a simple fibre cut that would have taken the right people no more than 15 minutes to fix, but we were far down the repair list due to the severity of the storm that hit New England and took three million people off the grid. Even our backup mechanisms were out of power: while our datacenter had power, the surrounding area did not. So not only were we cut off from reaching any clouds, we were also unreachable from outside our own datacenter. The usual solution to such isolation is remote sites and the placement of services in other regions of the country, but this gets relatively expensive for small and medium businesses. Can the Hybrid Cloud help here?
Whether you use replication as a means of disaster avoidance or disaster recovery, replicating your virtual environment between hot sites has always been a win. With current technology it is even possible to replicate to a replication receiver cloud, which could provide a measure of business continuity as well. So who are the players, what services do they provide, and how do they do it?
There is some debate amongst backup vendors on what defines an agent: some consider any amount of scripting to be an agent, while others hold that an agent is whatever performs the data transfer, plus any scripting necessary. Is there a need for both agent and agent-less approaches within a virtual environment? This also raises the question: who is responsible for properly handling the application whose data you are backing up?
This week I have been paying close attention to the developments of Hurricane Irene. In the beginning, Hurricane Irene looked like she was going to visit Florida on her journey to the north. Even though it looked like Florida was going to get hit by this storm, it was still early, and there was time for the storm to change course. It was also time to go out and make sure my hurricane supply kit at least had the basics, like batteries and flashlights, and to fill up the gas tanks of the cars. I have different levels of preparedness depending on how close the storm is and its projected path. Just as I have steps in place to prepare for a storm, most companies I have worked for in Florida have a storm plan in place and, like me, do not sound the real alarm until the storm is 48 – 72 hours from landfall, but they start preparing in case the alarm is needed.
More and more is coming out about the attack, launched from a McDonald’s, that left an organization crippled for a period of time. The final tally: the recently fired employee was able to delete 15 VMs before either being caught or giving up. On Twitter, it was quipped that the attacker must not have been a PowerShell programmer, because in the time it takes to delete 15 VMs by hand, a PowerShell script could have removed hundreds. Or perhaps the ‘bad actor’ was trying to avoid discovery. In either case, this has prompted discussions across Twitter, the blogosphere, and within organizations about how to secure against such attacks.
At the NE VMUG, while walking the floor, I saw a new virtualization backup player, perhaps the first generic Replication Receiver Cloud: TwinStrata. From information gained outside the NE VMUG, there is also a new virtualization backup player just for Hyper-V: Altaro, as well as a new release of Quest vRangerPro. The virtualization backup market is a very dynamic one, with new ideas, technologies, and concepts being put into the market at every turn. In many ways, the market leaders are not the bigger companies but the smaller, fast-growing ones. In the past, it was about features associated with pure backup; now it is about features plus fast disaster recovery and recovery testing.
Business Agility ...
Just in time for the adoption of vSphere 5 by enterprises seeking to virtualize business-critical and performance-critical applications, AppFirst, BlueStripe, and ExtraHop have pioneered a new category of APM solutions. This new category is focused on allowing IT to take responsibility for application response time for every application running in production. This is an essential step on the road toward virtualizing the 60% of applications that remain on physical hardware.
Security in the cloud and the virtual environment is ‘all about the data’ and not specifically about any other subsystem. The data has something it knows (its contents), something it is (its signature), and something it has (its digital rights); because it has these three elements, the data has an identity. However, protecting the data requires us to put things between the data and the real world, such as firewalls and complex role-based access controls, as well as non-intrusive methods to replicate the data to other locations. The goal of such replication could be to ensure that multiple sites have the same data (as with a hot site) or to have the data available in another location in case of disaster.
As a delegate for Tech Field Day 6 in Boston, I was introduced to SRM Replication as well as Zerto, a third-party replication tool. They seem as different as night and day, but are they? Both work within the vSphere environment to replicate virtual disks regardless of storage type, and apparently hook into the same location within VMware’s API stack. This shows a maturity in VMware’s API stack that until now has been unknown and secret. In this one area, Microsoft Hyper-V has been beating VMware vSphere: the availability of well-known APIs that are easy for third parties to use. I now see a change in VMware’s behavior; can they continue this growth?