Secure Agile Cloud Development takes Agile and DevOps to the next level. It is about code quality, based not just on what the developers test, but also on the application of continuous testing and of dynamic and static code analysis. Most importantly, it is about a repeatable and trackable process by which we can make code-quality assessments. We can find out the “who did what, when, where, how, and why” of our code, and that traceability makes it a useful tool in incident response. Imagine a world in which our production environments are run entirely by code.
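To make that concrete, here is a minimal sketch of what a repeatable, trackable quality assessment might look like: a script that runs static analysis and records the who, what, and when of each assessment. It assumes git and flake8 are on the path; the audit-log name and record shape are illustrative, not any particular product's format.

```python
#!/usr/bin/env python3
"""Minimal sketch: record a traceable code-quality assessment per commit.
Assumes git and flake8 are installed; the audit-log name and record
shape are illustrative, not any particular product's format."""
import json
import subprocess
from datetime import datetime, timezone

AUDIT_LOG = "quality-audit.jsonl"  # hypothetical location

def git(*args: str) -> str:
    return subprocess.run(["git", *args], capture_output=True,
                          text=True, check=True).stdout.strip()

def static_analysis_findings() -> int:
    # flake8 prints one finding per line by default; count them.
    result = subprocess.run(["flake8", "."], capture_output=True, text=True)
    return len([line for line in result.stdout.splitlines() if line.strip()])

def record_assessment() -> dict:
    # The "who did what, when, and where" of the code under assessment.
    record = {
        "commit": git("rev-parse", "HEAD"),
        "author": git("log", "-1", "--format=%an <%ae>"),
        "committed": git("log", "-1", "--format=%cI"),
        "assessed": datetime.now(timezone.utc).isoformat(),
        "static_analysis_findings": static_analysis_findings(),
    }
    with open(AUDIT_LOG, "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(record_assessment())
```

Appending one JSON record per assessment gives you an audit trail you can query after the fact, which is precisely what incident response needs.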
Let’s start the new year right with one of my current favorite topics for discussion: automation. In this article, I concentrate on second-day operations automation. Second-day operations is quite a different beast from build and decommission automation, in that it incorporates several distinct approaches to automation.
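As a taste of what second-day automation looks like in practice, here is a minimal sketch of one common flavor: a reconcile loop that watches systems already in production for drift and remediates it. The service names, the systemd-based health check, and the restart-as-remediation are all illustrative placeholders for whatever your environment actually requires.

```python
"""Sketch of second-day automation: a reconcile loop that detects drift
from a desired state and remediates it. Service names, the check, and
the remediation are illustrative placeholders."""
import subprocess
import time

DESIRED_SERVICES = ["web", "worker", "scheduler"]  # hypothetical units

def is_running(service: str) -> bool:
    # systemctl is-active exits 0 only when the unit is active.
    return subprocess.run(
        ["systemctl", "is-active", "--quiet", service]
    ).returncode == 0

def remediate(service: str) -> None:
    print(f"drift detected: {service} is down, restarting")
    subprocess.run(["systemctl", "restart", service], check=True)

def reconcile_forever(interval_seconds: int = 60) -> None:
    # Build/decommission automation runs once; second-day automation
    # runs continuously against systems already in production.
    while True:
        for service in DESIRED_SERVICES:
            if not is_running(service):
                remediate(service)
        time.sleep(interval_seconds)

if __name__ == "__main__":
    reconcile_forever()
```

The point is less the restart itself than the shape of the loop: second-day automation never finishes, it converges.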
After the Apollo 1 disaster, astronaut Frank Borman told Congress that the tragedy had not been caused by any one company or organization, but by everyone involved with the Mercury, Gemini, and Apollo programs. The problem had been a failure of imagination. They knew that at some point there would be a fire in a space capsule; they just assumed it would happen somewhere in space. They did not consider the possibility of a fire while the capsule was still on the ground. Within the security world, we call this failure of imagination “unknown unknowns,” but it boils down to the same thing: we simply do not think about some things. Even with all the tools out there to help us, we still have failures of imagination.
We all need performance and capacity management tools to fine-tune our virtual and cloud environments, but we need them to do more than just tell us there may be a problem. We need them to find the root causes of problems, whether those causes lie in code, infrastructure, or security. Both the new breed of applications designed for the cloud à la Netflix and older technologies instantiated within the cloud need to tell us more about their health. Into this breach steps a new set of tools, alongside an existing set.
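To illustrate the difference between “there may be a problem” and a root cause, here is a small sketch that ranks candidate causes by how far each metric has drifted from its own recent baseline. The metric names, the sample data, and the z-score heuristic are illustrative assumptions, not any vendor's method.

```python
"""Sketch: going beyond "there may be a problem" by ranking which
metric deviates most from its baseline when latency degrades.
The metrics, data, and scoring are illustrative."""
from statistics import mean, stdev

# Hypothetical per-minute samples for an application and its environment.
metrics = {
    "app_latency_ms":   [110, 112, 108, 115, 109, 111, 340],
    "cpu_percent":      [40, 42, 41, 39, 43, 40, 44],
    "disk_io_wait_pct": [3, 4, 3, 5, 4, 3, 38],
    "auth_failures":    [0, 1, 0, 0, 1, 0, 2],
}

def z_score(series: list) -> float:
    """How far the latest sample sits from the earlier baseline."""
    baseline = series[:-1]
    spread = stdev(baseline) or 1.0  # guard against a flat baseline
    return (series[-1] - mean(baseline)) / spread

# Rank candidate causes by how anomalous their latest sample is.
ranked = sorted(metrics.items(), key=lambda kv: abs(z_score(kv[1])),
                reverse=True)
for name, series in ranked:
    print(f"{name:18s} z={z_score(series):6.1f}")
```

Run against this sample data, the latency spike and the disk I/O wait spike rise to the top together, pointing at infrastructure rather than code or security as the place to dig first.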
Have you noticed lately that the term “big data” is being used with increasing frequency? It seems that working with big data is one of the more desired and in-demand skill sets in the technology space. What do you think “big data” is, and what do you think it represents? One definition to consider is this one from Wikipedia: “Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process the data within a tolerable elapsed time.” So, who benefits the most from its use? Have you stopped to consider just what makes up big data? Let’s explore that question a little more deeply.
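By that definition, the defining move of big data is that you stop loading the data set whole and start streaming it. Here is a minimal sketch of that idea; the file name and the comma-separated record format are hypothetical.

```python
"""Sketch of the definition above: when a data set no longer fits the
tools, you stop loading it whole and start streaming it. The file name
and record format are illustrative."""
from collections import Counter

LOG_FILE = "events.log"  # hypothetical very large event log

def count_events(path: str) -> Counter:
    counts: Counter = Counter()
    # Iterating a file object reads one line at a time, so memory use
    # stays flat no matter how large the file grows.
    with open(path) as stream:
        for line in stream:
            event_type = line.split(",", 1)[0]
            counts[event_type] += 1
    return counts

if __name__ == "__main__":
    for event, total in count_events(LOG_FILE).most_common(10):
        print(event, total)
```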
In the past, I have written about the next generation of data protection, which combines analytics with broader data protection and holistic system protection in one easy-to-use product (or set of products). The goal is to take disaster recovery into the future, where we can restore, and test the restores of, not just our data, but also the systems required to make that data accessible, including all networking and security constructs. If you suffered a massive disaster, could your disaster recovery process restore your entire environment at the push of a button? Does your disaster recovery testing feed back into analytics to determine what must change to make that a reality?
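As a thought experiment, a push-button restore might look something like the sketch below: an ordered rebuild of the constructs around the data, with the test results written out for analytics. Every step name and function here is a hypothetical placeholder for calls into real DR tooling.

```python
"""Sketch: a push-button restore that rebuilds the environment around
the data, not just the data. Every step and function is a hypothetical
placeholder for calls into your DR tooling."""
import json
import time

# Order matters: data is useless until the constructs around it exist.
RESTORE_STEPS = [
    "networks",          # VLANs, subnets, routes
    "security_groups",   # firewall rules, ACLs
    "identity",          # accounts, roles, keys
    "systems",           # VMs / containers
    "data",              # the backups themselves
]

def restore(step: str) -> bool:
    print(f"restoring {step} ...")  # stand-in for real tooling calls
    return True

def verify(step: str) -> bool:
    print(f"verifying {step} ...")  # stand-in for real health checks
    return True

def push_button_restore() -> None:
    results = []
    for step in RESTORE_STEPS:
        started = time.time()
        ok = restore(step) and verify(step)
        results.append({"step": step, "ok": ok,
                        "seconds": round(time.time() - started, 2)})
        if not ok:
            break  # a failed layer blocks everything above it
    # Feed the test results back into analytics, per the question above.
    with open("dr-test-results.json", "w") as out:
        json.dump(results, out, indent=2)

if __name__ == "__main__":
    push_button_restore()
```

Writing the per-step timings and outcomes to a results file is the hook for the analytics loop: each test run tells you which layer is slowest or least reliable, and therefore what to fix before the real disaster arrives.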