We are currently midway through an information and digitization revolution that could plausibly be compared, in its impact, to the mechanical impact of the Industrial Revolution. For all its advances, the Industrial Revolution also harmed the environment and working conditions, among other areas, and it took 150 years or more for those harms to be recognized and addressed with legislation, by which time untold damage had been done. The current “information revolution” risks a similar path. The big issues of today are privacy and data protection. Given that Europe has a history of personal information being used to enforce totalitarianism and even genocide, it is no surprise that Europe is at the forefront of updating data protection legislation for the age in which we now live and work.
Data Protection is much more than verifying that you have a valid backup. While recovering your data is important, Data Protection also encompasses data life cycle management, business continuity, disaster recovery, and continuous data protection as they apply to virtualized and cloud environments. This topic also examines how to secure and monitor data as it moves between disparate environments, and how to protect ever larger volumes of data within ever shorter windows.
Managing the security and protection of your environment in order to safeguard your crown jewels has always been important. However, it has never been more so than today, when data-breach announcements are commonplace and everyone from nation-states to teenagers in their bedrooms has access to powerful tools for breaking in.
As I read the “we solve ransomware” emails in my inbox and watched the comments on Twitter and Slack, I started to think about how to solve ransomware once and for all. It sounds like a difficult task, but I believe it comes down to architecture: an architecture built on modern ideas, one that combines security with data protection. I have written about detecting ransomware before; now we need a way to pull together everything we know, so that institutions can recover quickly from a new attack while preventing known attacks. This concept came together at VeeamON 2017, and I spoke about it briefly on The Cube. Now it is time to put everything together.
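Detection is one building block of such an architecture. A minimal sketch of my own (not any product's method) for spotting mass encryption, assuming that encrypted file contents show near-maximal byte entropy while ordinary documents do not:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; encrypted or compressed
    data approaches the maximum of 8.0, plain documents stay well below."""
    if not data:
        return 0.0
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in Counter(data).values())

# Illustrative samples: repeated English text vs. a uniform byte spread
# standing in for ciphertext.
plaintext = b"quarterly report: revenue up, costs flat, " * 50
ciphertext = bytes(range(256)) * 8

assert shannon_entropy(plaintext) < 5.0   # looks like a document
assert shannon_entropy(ciphertext) > 7.9  # looks encrypted: flag it
```

A backup system could apply a check like this to incoming deltas and raise an alarm when a VM's files suddenly jump toward maximal entropy, which is the signature of bulk encryption.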
In one of my more recent articles, I called attention to the release, or better yet the data dump, of exploits and hacking tools targeting Microsoft’s Windows OS, Linux, firewalls, and others. One of the main purposes of my post was to highlight the grave danger these exploits pose to the world. I hoped there would be enough interest from individuals in the industry to obtain copies of the exploits and contribute to the countermeasures needed to better protect and defend the organizations we all represent. I was equally sure that many individuals around the world would reverse engineer the exploits for more devious purposes. We have just experienced the first of what I believe will be multiple attacks unleashed across the globe.
I was reading a Reddit request for help regarding ransomware. The title, “Got hit BAD tonight,” describes the catastrophe simply and to the point. The ransomware in question attacked the hypervisor, then proceeded to encrypt all backups and any other systems connected to it. This is exactly the issue that virtualization and cloud security folks discuss daily: the ultimate admin escape. This was not an escape-the-VM attack; it was an admin escape. The rule for accessing the hypervisor directly is DO NOT. The rule for using administrator credentials to do anything is DO NOT. Admin escape counts on those mistakes being made. Even so, there is a great deal we can learn from this episode. I feel for the victim, but it is time to quickly learn from this and implement better protections within your own environments. They are targets as well.
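One protection that defeats this pattern is write-once (WORM) backup storage: even an attacker holding admin credentials cannot overwrite or delete what is already stored. A toy sketch in Python of the principle (the class and its methods are hypothetical illustrations, not any vendor's API):

```python
class ImmutableBackupStore:
    """Toy write-once backup store: new versions may be appended,
    but existing backups can never be overwritten or deleted."""

    def __init__(self):
        self._objects = {}  # (name, version) -> bytes

    def put(self, name: str, data: bytes) -> int:
        # Every write creates a new version; nothing is replaced in place.
        version = sum(1 for (n, _) in self._objects if n == name)
        self._objects[(name, version)] = data
        return version

    def get(self, name: str, version: int) -> bytes:
        return self._objects[(name, version)]

    def delete(self, name: str, version: int):
        # Ransomware running with stolen admin credentials hits this wall:
        # the store simply has no delete path.
        raise PermissionError("write-once store: delete is not supported")


store = ImmutableBackupStore()
v0 = store.put("vm-backup", b"clean image")
# An attacker with admin credentials can only add a new version:
v1 = store.put("vm-backup", b"encrypted by ransomware")
# The clean copy remains recoverable.
assert store.get("vm-backup", v0) == b"clean image"
```

Real equivalents of this idea include object storage with retention locks and pull-based backup targets that never expose delete rights to the hypervisor's credentials.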
A bane of having data is the need to know: where all your sensitive data resides, what that data is, who has accessed it, and how it was accessed. Managing the who, what, where, why, and how of data is a struggle as old as time, and scale changes that struggle. We continue our scale discussion on the Virtualization and Cloud Security podcast by delving into data management. Paula Long, CEO and cofounder of DataGravity, joins us to discuss data management at scale. How do we answer these questions?
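The "who, what, and how" part of the problem can be made concrete with a small sketch: an audit index that records each access so the question "who touched this file?" becomes a lookup rather than an investigation. All names here are my own illustration, not DataGravity's design:

```python
from collections import defaultdict
from datetime import datetime, timezone

class AccessAudit:
    """Toy audit index: record who touched which data item, how, and when."""

    def __init__(self):
        self._by_item = defaultdict(list)

    def record(self, item: str, who: str, how: str):
        self._by_item[item].append({
            "who": who,
            "how": how,
            "when": datetime.now(timezone.utc).isoformat(),
        })

    def who_accessed(self, item: str):
        """Distinct principals that have ever touched this item."""
        return sorted({entry["who"] for entry in self._by_item[item]})


audit = AccessAudit()
audit.record("/finance/payroll.xlsx", "alice", "read")
audit.record("/finance/payroll.xlsx", "backup-svc", "copy")
assert audit.who_accessed("/finance/payroll.xlsx") == ["alice", "backup-svc"]
```

At scale, the hard part is not the data structure but feeding it: every storage system, hypervisor, and cloud service must emit access events into one queryable place.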
The recent Amazon Web Services Simple Storage Service (S3) outage has taught us quite a bit about fragile cloud architectures. While many cloud providers will make hay of it over the next few weeks, the truth is that current cloud architectures, including modern hybrid cloud architectures, are fragile. We need to learn from this outage to design better systems: ones that are not fragile, ones that can recover from an outage. Calling the cloud fragile is not naysaying; it is a chance to do better! What can we do better?
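One step toward a less fragile design is a read path that tries the primary region and falls back to a replica instead of failing outright. A minimal sketch, where the fetcher functions and region names are illustrative stand-ins rather than real AWS APIs:

```python
def fetch_with_fallback(key, fetchers):
    """Try each (region, fetcher) in order; return the first success.

    A fragile design has exactly one entry in `fetchers`; a resilient one
    has a replicated copy it can degrade to, even if that copy is stale.
    """
    errors = []
    for region, fetch in fetchers:
        try:
            return region, fetch(key)
        except Exception as exc:  # real code would catch the client's error type
            errors.append((region, exc))
    raise RuntimeError(f"all regions failed: {errors}")


def primary(key):
    raise ConnectionError("us-east-1 is down")  # simulate the S3 outage

def replica(key):
    return f"value-for-{key}"  # replica serves a (possibly stale) copy

region, value = fetch_with_fallback(
    "config.json",
    [("us-east-1", primary), ("us-west-2", replica)],
)
assert region == "us-west-2"
```

The design choice worth noting is that the fallback copy must already exist before the outage: cross-region replication is part of the architecture, not something you bolt on while the primary is down.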