Secure Agile Cloud Development takes Agile and DevOps to the next level. It is about code quality, based not just on what the developers test, but also on the application of continuous testing and on dynamic and static code analysis. Most importantly, it is about a repeatable and trackable process by which we can make code quality assessments. We can find out the “who did what, when, where, how, and why” of our code. It is a useful tool in incident response. Imagine a world in which our production environments are run entirely by code.
In order to have a world run by code, we must first create new types of developers. We probably already employ them, just not in an official capacity. We need a way to determine, automatically, who is best suited for each new development role. These roles all start from what we have today: the people who are administrators, developers, QA, security, and operations. Next, we determine who is best suited to fill the new “as Code” roles: Infrastructure/Operations as Code, Security as Code, and Testing as Code. We may even discover that some of our existing developers are better at one task than another.
This starts with analytics about code quality: the code quality we need and require in order to have a secure, high-performance, and usable environment. The question to ask before changing everyone’s role is “How do we change a ‘gut feeling’ about code quality into real, usable numbers?” To answer it, we need analytics, but we also need better streams of data. The number of bugs closed, without knowing the number of bugs introduced, is no longer a useful metric. Metrics that track developer behavior are not metrics that track code quality. We really want to know several things about our code:
- Did we introduce new logic issues?
- Did we introduce new security issues?
- Did we introduce new performance issues?
- Did we introduce new costs?
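The questions above suggest tracking the delta each change introduces, rather than raw closure counts. A minimal sketch of that idea, assuming a hypothetical per-commit scan record (the field names and the `CommitScan` structure are illustrative, not from any particular tool):

```python
from dataclasses import dataclass

@dataclass
class CommitScan:
    """Hypothetical per-commit scan results; field names are illustrative."""
    commit: str
    logic_issues: int      # new logic issues found in this commit
    security_issues: int   # new security findings
    perf_issues: int       # new performance findings
    issues_closed: int     # previously known issues resolved

def quality_delta(scans):
    """Net issues introduced minus issues closed: a trend, not a gut feeling."""
    introduced = sum(s.logic_issues + s.security_issues + s.perf_issues
                     for s in scans)
    closed = sum(s.issues_closed for s in scans)
    return introduced - closed

scans = [
    CommitScan("a1b2c3", logic_issues=2, security_issues=1,
               perf_issues=0, issues_closed=4),
    CommitScan("d4e5f6", logic_issues=0, security_issues=0,
               perf_issues=1, issues_closed=1),
]
print(quality_delta(scans))  # -1: the codebase improved over these commits
```

A negative delta over a release means quality is trending up; a positive one flags the release for attention before any human has to form an opinion.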
These are harder questions to answer. However, we can begin to answer them if we introduce new ways of gathering data and, at the same time, introduce security into the Agile and DevOps pipelines. Many companies do these things in part, but they focus more on orchestrating the last mile than on the functionality that precedes it. The goal is a 100% automated set of parallel pipelines, all working to improve code quality and make better use of human time.
We can do this by tying together data that occurs at every junction of a pipeline. It all starts with static analysis: scanning code as it enters the repository for simple-to-find issues, such as API key leakage. API key leakage into public, or even private, repositories leads to significant unplanned costs. Static code analysis can also run continually to uncover security- and performance-related logic issues. Logging the results into an analytics tool such as Splunk or Elasticsearch will give you a pretty good idea of initial code quality.
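As a sketch of the repository-entry check, here is a toy secret scanner. The patterns are illustrative only; real tools such as git-secrets or truffleHog use far larger rule sets plus entropy analysis, and the sample text is invented for the example:

```python
import re

# Illustrative patterns only: an AWS-style access key ID shape and a
# generic "api_key = ..." assignment with a long literal value.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)api[_-]?key\s*=\s*['\"][^'\"]{16,}['\"]"),
]

def scan_text(text):
    """Return (line_number, line) pairs that look like leaked credentials."""
    hits = []
    for n, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line.strip()))
    return hits

sample = 'config = {"region": "us-east-1"}\napi_key = "0123456789abcdef0123"\n'
for n, line in scan_text(sample):
    print(f"possible secret on line {n}: {line}")
```

Wired into a pre-commit or pre-receive hook, a scanner like this rejects the push before the key ever reaches the repository, and the hits it logs feed the same analytics store as everything else in the pipeline.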
Next, we introduce dynamic analysis into the pipeline. Dynamic analysis takes the bits integrated into a build, plus any artifacts produced, and runs a series of tests against them. Ensuring that the proper libraries and components are used by the build process is also important. A case in point is determining which SSL libraries are in use. Some have serious flaws, others may not be allowed by policy, and still others may be unknown quantities (because they are too new). We need to know what makes up our application and what was changed where, how, and by whom. Use of unauthorized components leads to additional costs, security concerns, and performance issues as well. Black Duck and other tools perform these actions. This information should also be tracked and logged.
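The SSL-library case can be sketched as a simple composition check. Everything here is hypothetical policy data for illustration; commercial tools such as Black Duck maintain real vulnerability and license databases:

```python
# Compare a build's declared components against policy. The policy
# entries below are invented examples, not real advisories.
DISALLOWED = {
    ("openssl", "1.0.1"): "known-vulnerable release; forbidden by policy",
}
KNOWN = {"openssl", "zlib"}   # components we have actually reviewed
UNKNOWN_OK = False            # fail closed on anything never reviewed

def check_components(components):
    """components: iterable of (name, version). Returns policy violations."""
    violations = []
    for name, version in components:
        if (name, version) in DISALLOWED:
            violations.append(f"{name} {version}: {DISALLOWED[(name, version)]}")
        elif name not in KNOWN and not UNKNOWN_OK:
            violations.append(f"{name} {version}: not yet reviewed")
    return violations

build = [("openssl", "1.0.1"), ("zlib", "1.2.13"), ("newlib-x", "0.1")]
for v in check_components(build):
    print("BLOCK:", v)
```

Note the fail-closed choice: a component that is merely unknown blocks the build just as a forbidden one does, which is exactly the "too new to be known" concern above.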
Lastly, we add in policy analytics and orchestration. The decision to integrate, deploy, or roll back should be automated, repeatable, and tracked at every step. In addition, any existing in-production security monitoring, zero-day testing, and controls should remain in place. Their results should be pulled into a policy (analytics) engine together with the results of continual testing and other analyses. Then a decision can be made to roll out code based on its quality.
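A minimal sketch of such a policy gate, assuming hypothetical finding records and an illustrative severity threshold; the point is that the decision and all of its inputs are logged so the decision can be replayed later:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("policy-gate")

def gate(static_findings, dynamic_findings, composition_violations):
    """Hypothetical deploy/rollback decision; the threshold is illustrative.

    Every decision is logged with its inputs so it can be replayed
    during incident response.
    """
    decision = "deploy"
    if composition_violations or any(
        f["severity"] == "high" for f in static_findings + dynamic_findings
    ):
        decision = "rollback"
    log.info("decision=%s inputs=%s", decision, json.dumps({
        "static": static_findings,
        "dynamic": dynamic_findings,
        "composition": composition_violations,
    }))
    return decision

print(gate(
    static_findings=[{"id": "S-12", "severity": "low"}],
    dynamic_findings=[],
    composition_violations=[],
))  # a low-severity-only run passes the gate
```

Because the gate is a pure function of its logged inputs, the same code and the same findings always yield the same decision, which is the repeatability the next paragraph depends on.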
Repeatability takes automation. Decisions that are automated become easier to trace back when an incident occurs. Repeatability and logging allow us to track incidents back to code, providing another path for forensics, one that is becoming harder and harder to find. Repeatability also gives us a way to measure code quality, as we automatically perform the same actions over and over while improving them over time. This is where DevOps comes in.
Edward Haletky