I have written many times about the need for application-centric data protection and data-centric security. Both require that our data protection, security, management, and networking tools be data-aware. We use applications, but we thrive on data. The more data we have, the more opportunity there is to make use of it, which has given rise to big data tools and big data extensions, even to hypervisors. We talk constantly about moving data closer to processing, using flash and other techniques at the storage layer. But we have not grown the other aspects of our systems to be data-aware. It is time this changed. Continue reading Data-Aware Services: Oh, the Places We Could Go!
In the past, I have written about the next generation of data protection, which combines analytics with broader data and holistic system protection in one easy-to-use product (or set of products). The goal is to take disaster recovery into the future, when we will be able to restore, and test the restoration of, not just our data, but also the systems required to make that data accessible, including all networking and security constructs. If a massive disaster struck, could your disaster recovery techniques restore your entire environment at the push of a button? Does your disaster recovery testing feed back into analytics to determine what needs to change to make this a reality? Continue reading Next-Generation Data Protection: Realities
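To make the idea of push-button recovery with analytics feedback concrete, here is a minimal sketch. Everything in it is hypothetical: the function names, the `restore_fn` callback, and the report fields are illustrations of the concept, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class RestoreResult:
    """Outcome of restoring one component of the environment."""
    component: str
    succeeded: bool
    duration_s: float

def run_dr_test(components, restore_fn):
    """Restore every component (data, systems, networking, security
    constructs) and record how each restore went.

    restore_fn(name) is a hypothetical hook that performs the restore
    and returns (succeeded, elapsed_seconds)."""
    results = []
    for name in components:
        ok, elapsed = restore_fn(name)
        results.append(RestoreResult(name, ok, elapsed))
    return results

def analyze(results):
    """Feed test outcomes back into analytics: what must change
    before push-button recovery becomes a reality?"""
    failed = [r.component for r in results if not r.succeeded]
    total = sum(r.duration_s for r in results)
    return {"failed_components": failed, "total_restore_time_s": total}
```

The point of the sketch is the loop, not the plumbing: every DR test produces structured results, and those results drive the next round of changes rather than disappearing into a run log.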
On the August 7 Virtualization Security podcast, we discussed how people in virtualization, security, compliance, data protection, storage, and networking (and everyone else in IT) should form their own organizational communities to improve overall communication and establish easy access to experts in those fields. This thought came out of a conversation I had with @jtroyer about whether IT should be one community rather than a set of silos. Even today, we are seeing more silos and fewer communities; the lines have simply been drawn differently. Continue reading Building Your Own IT Community
Over the last few weeks, I have been taking a hard look at various data protection tools to determine whether they meet the goals for the next generation of tools. The main goal is application-centric backup with increased visibility into our methodologies: we need to know not only how well every backup, replica, and recovery operation meets our SLAs, but also whether all our data is actually available. This includes determining the dependencies of each application and taking a comprehensive look at all the different forms of data protection in use. The other major goal is to remove the human element: in essence, we need data protection that does not require a human to set it up for us.
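Two of the checks described above can be sketched in a few lines: does any protection job cover every dataset the application depends on, and is the most recent copy within the SLA's recovery point objective? The function names and inputs below are hypothetical, assuming the tool can discover an application's dependencies automatically.

```python
from datetime import datetime, timedelta

def coverage_gaps(app_dependencies, protected):
    """Datasets the application depends on that no backup or
    replica job covers; an empty list means full coverage."""
    return sorted(set(app_dependencies) - set(protected))

def meets_rpo(last_backup, rpo, now=None):
    """True if the most recent successful copy is no older than
    the recovery point objective allowed by the SLA."""
    now = now or datetime.utcnow()
    return (now - last_backup) <= rpo
```

A next-generation tool would run checks like these continuously, fed by discovered dependencies rather than a hand-maintained list; that is what takes the human out of the setup.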
During a recent Twitter conversation about disaster recovery and business continuity testing, I began to consider how we communicate during a disaster. We do so not with normal communication methods, but more often than not with an interrupting form of communication, in which constant requests for updates, criticisms, and outright demands for attention are directed at those doing the work of recovering a system. During a disaster recovery effort, communication breaks down. Why? Generally, because not enough testing has been performed to surface communication problems, or any other issues, in advance. How can we improve this communication, or even get the proper people involved, when six feet of snow, water, or mud surrounds our place of work? Continue reading Disaster Recovery Communication
Attending Gigaom Structure was an exercise in drinking from the fire hose of leading-edge innovation that public cloud providers are bringing to their customers worldwide. These innovations will not only have a profound effect on public cloud computing but will also ultimately impact data center architectures, costs, and benefits worldwide.