The answer is to dramatically narrow the scope and set of enforcement actions for SOPA and PIPA so that they target only offshore sites engaged in large-scale commercial piracy, and so that the existing safe harbor for sites that take content from users is both maintained and formally recognized as an exception to the scope of SOPA and PIPA. This will ensure that law enforcement can go after the really bad actors, and that the many good and useful sites that are the basis of the “good Internet” are not collateral damage in these enforcement efforts.
Data Protection techniques should be implemented and tested long before they are needed; they are a necessary component of any IT organization. However, the most recent community podcast brought to light several implementation issues around Data Protection, specifically Disaster Recovery: organizations still do not test their DR plans, and some are waiting for a hardware refresh before implementing a DR plan at all.
While the legacy enterprise management vendors might like to think of themselves as the Borg (prepare to be assimilated – there is no escape), the new technical requirements and the new buying patterns in the virtualization market do not lend themselves to a repeat of history. Legacy management vendors are unlikely to be able to acquire their way into this market, because their core platforms and business models do not work for the customers who are running virtualized environments and buying management solutions. So to my good friend Andi Mann, I respectfully disagree.
Data Protection is still an issue with many small businesses and smaller enterprises who virtualize, specifically around the data protection process and, ultimately, where to store the data. When I speak to people, they are struggling with whether to place the data on tape, on Blu-ray, in the cloud, or on other disks. Medium and large enterprises already have such policies in place, but, like everything else, when they virtualized those policies may have fallen by the wayside and now need to be recovered, dusted off, and put into practice. The choice of where the data will ultimately reside when disaster strikes is an ongoing discussion in the virtualization community. Ultimately, Data Protection is just that: protecting the data from loss and destruction while allowing for quick recovery.
2011 saw an increase in virtualized and cloud data protection partnerships and advancements. One of the biggest advancements is the growing support for Microsoft Hyper-V from long-time VMware-specific backup solutions. The new partnerships include team-ups between performance management and data protection solutions, as well as an increase in the methods for replication and other forms of data protection. 2011 was a very big year in the Data Protection arena of cloud and virtualization. This is the 2011 Year in Review for data protection.
VMware has had a great 2011. Product execution was excellent on all fronts except for VMware View, where larger strategy issues are also afoot. VMware is, and will likely remain next year, not only the most important but also the best system software vendor on the planet. We can only look forward to continued progress with vSphere, the management offerings, and the applications platform offerings.
Data Protection is not just about backup these days; instead, it concentrates on two all-important concepts for a business: disaster recovery and business continuity. While backup is a part of disaster recovery, restoration is all-important. If it is not possible to restore your data in a timely fashion, the backup has failed. Technologies that allow us to access our data immediately therefore provide a level of business continuity. But how is this achieved? Where do you save your critical data so that it is readily restorable? Is your backup integrated into your monitoring software? Have you tested your restore today?
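As a minimal sketch of what “testing your restore” can mean in practice, the Python below backs a file up, restores it, and verifies that the restored copy is byte-identical to the original. The file names and directory layout are illustrative only; a real DR test would restore from the actual backup medium and feed the pass/fail result into your monitoring software.

```python
import hashlib
import shutil
import tempfile
from pathlib import Path

def checksum(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def verify_restore(source: Path, backup_dir: Path, restore_dir: Path) -> bool:
    """Back up a file, restore it, and confirm the restored copy is
    byte-identical to the original. Local copies stand in for the
    real backup and restore steps in this sketch."""
    backup = backup_dir / source.name
    shutil.copy2(source, backup)           # simulate the backup step
    restored = restore_dir / source.name
    shutil.copy2(backup, restored)         # simulate the restore step
    return checksum(source) == checksum(restored)

# Exercise the round trip with a throwaway file.
with tempfile.TemporaryDirectory() as tmp:
    tmp = Path(tmp)
    (tmp / "backup").mkdir()
    (tmp / "restore").mkdir()
    data = tmp / "critical.dat"
    data.write_bytes(b"payroll records")
    ok = verify_restore(data, tmp / "backup", tmp / "restore")
    print("restore verified" if ok else "RESTORE FAILED")
```

The point of the exercise is the comparison at the end: a backup that cannot be restored and verified has already failed, whether or not the backup job reported success.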
As I was flying home recently, the gentleman beside me was talking about his need to do the “cloud thing” as a means to backup his data. He recently experienced a multi-retail shop backup failure where the local backup disk was corrupted and the backups failed to happen. I also experienced a backup failure, when my backup software was upgraded. In both cases, the backup software did not mail out, or alert the appropriate people of the failure. Even if the backups did work, the data was still corrupted. So the question is, how can cloud based backups help with either of these scenarios?
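One way to address the silent-failure problem described above is to wrap the backup job in a small script that detects a nonzero exit code and sends the alert itself, rather than trusting the backup software's own notification path. The command, addresses, and SMTP host below are hypothetical; this is a sketch of the pattern, not any particular product's API.

```python
import smtplib
import subprocess
from email.message import EmailMessage

def run_backup_with_alert(cmd, admin_addr, smtp_host="localhost"):
    """Run a backup command; email an alert if it exits nonzero.
    Failure detection and notification live outside the backup
    tool, so an upgrade to the tool cannot silence them."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        msg = EmailMessage()
        msg["Subject"] = f"BACKUP FAILED: {' '.join(cmd)}"
        msg["From"] = "backup-monitor@example.com"   # hypothetical sender
        msg["To"] = admin_addr
        msg.set_content(result.stderr or "backup exited nonzero")
        try:
            with smtplib.SMTP(smtp_host) as smtp:
                smtp.send_message(msg)
        except OSError:
            # If mail itself is down, at least leave a trace on stderr/stdout.
            print("ALERT DELIVERY FAILED:", msg["Subject"])
    return result.returncode
```

Run from cron or a scheduler, the wrapper turns a quietly failing backup into a noisy one, which is half the battle; the other half, as above, is verifying that the data written is actually restorable and not corrupted.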
The Virtualization Practice was recently offline for two days; we thank you for coming back to us after this failure. The cause was a simple fibre cut that would have taken the proper people no more than 15 minutes to fix, but we were far down on the repair list due to the nature of the storm that hit New England and took three million people off the grid. Even our backup mechanisms were out of power. While our datacenter had power, the rest of the area in our immediate vicinity did not, so not only were we isolated from reaching any clouds, but we were also isolated from being reached from outside our own datacenter. The usual solution to such isolation is remote sites and locating services in other regions of the country, but this gets relatively expensive for small and medium businesses. Can the Hybrid Cloud help here?