As computing, storage and networking continue migrating from physical equipment to virtual environments, provisioning and managing them increasingly relies on software. One of the most important benefits of virtualization and cloud computing is the idea of infrastructure as code: using code to automate the provisioning and management of entire application environments and ecosystems (infrastructure) reliably, on demand. This idea, in conjunction with agile development practices, is what makes continuous delivery possible for leading companies like Facebook, Flickr, Etsy and others.
In agile teams today, developers writing applications partner with DevOps team members who write, test, build and package infrastructure as code with tools including Puppet, Chef and Cucumber. This is helping traditionally bottlenecked and siloed infrastructure teams begin working more closely with application teams and improve overall business agility. DevOps team members write infrastructure code alongside the application under development – all leveraging popular development practices including code management, continuous integration, release automation and test-driven development.
Infrastructure as a Bottleneck
In the not-so-old days, when a development team needed new application environments (development, staging, production, etc.), they would have to go through a detailed analysis of estimated performance and storage capacities before making purchase decisions. They’d plug numbers into spreadsheets and hope to order the right amount of hardware to meet their needs, typically including more than currently needed to allow for future business growth. Large organizations might buy in bulk, ordering hardware only once a quarter to save money.
After a month or so, the servers arrive on pallets at the data center, where they are racked and wired, the operating systems installed and configured, and the supporting ecosystem configured and tested before the environment is finally rolled out to the team for use. This process typically requires many manual steps and coordinated effort across different specialists and teams. The result is long lead times on new infrastructure, a fact of life that is unfortunately still the norm in many organizations, especially large ones.
Even for my clients that are large IT organizations using virtualization, six or more weeks for a new agile team to get a test environment configured and loaded with test data is not uncommon. This is not because the technology doesn’t exist to automate most of this process. Rather, siloed teams overloaded with requests are not set up to work together efficiently and effectively. They are often so behind in their work that they have scant time to invest in creating methods and tools to help their internal customers provision and manage virtualized environments.
Infrastructure as Code: Self Service
Cloud computing, whether public or private, enables virtual infrastructure to be created and managed on demand by authorized personnel using a web console or code APIs. What was once a bottleneck can now become a service provided to agile teams for managing their environments, off-loading work from operations and reducing time to market. Web consoles and code APIs each have advantages and can be used separately or together.
The advantage of web management consoles such as those for EC2 or Azure is that it’s fairly intuitive for a member of an agile team, such as the tech lead or a senior developer, to create their own environments via point and click. There are decisions to make along the way, but in my experience any competent technologist can quickly figure out how to create, start, stop and destroy computing environments. This typically includes application developers, build or release engineers and progressive system admins with additional skills and interests in collaboration, automation and quality (aka DevOps team members).
A web console works well when there’s a defined group of people who are generally available to create environments and demand is fairly low. In my consulting company we’ll create a few environments a week in support of our projects, so a console works fine for our needs. However, this approach does not scale and can quickly become a bottleneck if new environments are needed frequently. Agile teams using release automation and continuous delivery will quickly outgrow the manual process of managing infrastructure through a web console.
Infrastructure as Code: Scaling
The scalable way to set up new environments is using code that can be tested, automated and run on demand. This infrastructure code, versioned in a code repository alongside the application code, enables the entire application stack to be provisioned on demand. It typically creates the machine instance (the virtual server), installs all the supporting software, deploys the application and validates that the entire stack is set up correctly.
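The provisioning steps just described can be sketched in Python. This is a minimal illustration, not a real SDK: `create_instance`, `install`, `deploy` and `validate` are hypothetical stand-ins for the cloud API and configuration-management calls that a real script would make through a provider SDK or a tool like Puppet or Chef.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Environment:
    """Tracks what has been provisioned so far (illustrative only)."""
    name: str
    instance: Optional[str] = None
    packages: List[str] = field(default_factory=list)
    app_deployed: bool = False


def create_instance(env: Environment, size: str) -> None:
    # Stand-in for a cloud API call that boots a virtual server.
    env.instance = f"{env.name}-{size}"


def install(env: Environment, *packages: str) -> None:
    # Stand-in for configuration management installing supporting software.
    env.packages.extend(packages)


def deploy(env: Environment, artifact: str) -> None:
    # Stand-in for release automation pushing the application build.
    env.app_deployed = True


def validate(env: Environment) -> bool:
    # Smoke-check the stack before handing the environment to the team.
    return env.instance is not None and env.app_deployed


# Provision an entire staging stack with one script run.
staging = Environment("staging")
create_instance(staging, "medium")
install(staging, "java", "tomcat", "postgresql")
deploy(staging, "app-1.0.war")
print(validate(staging))  # True
```

Because the whole sequence lives in a single versioned script, it can be re-run on demand to rebuild the environment from scratch.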
Tools such as Puppet and Chef make it straightforward for DevOps team members to write code that creates their virtualized application environments using cloud APIs instead of a web console. When combined with automation and hooked up to continuous integration, the entire virtual infrastructure can be destroyed and recreated on demand or on a schedule, whichever is preferred.
For many teams, this is a significant achievement – the ability to create a new application environment with all supporting software properly configured with a single command on demand or on schedule. Yet the most progressive teams are taking it one step further: test-driving their infrastructure.
Infrastructure as Code: Test-Driven
Analogous to how developers package unit tests with their application code, DevOps team members can write automated infrastructure tests to validate that everything is set up correctly. Teams write tests in tools such as Cucumber that assert that aspects of the newly created environment are true. Typically this includes validating what software is installed, where it is installed, who has access to it, whether it is currently running and how it is configured. The infrastructure test code written in Cucumber is versioned with the infrastructure code written in Puppet or Chef, forming a matched pair. Once versioned alongside the rest of the application, these infrastructure tests can be run as part of the continuous integration process.
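As a sketch of the kind of assertions such tests make, here is the same idea expressed in Python rather than Cucumber. The `facts` dictionary is a hypothetical stand-in for data a real test suite would gather from the running machine (for example over SSH); the package names and settings are illustrative.

```python
# Pretend these facts were collected from a freshly provisioned server.
facts = {
    "packages": {"nginx": "1.18.0", "postgresql": "13.4"},
    "services_running": {"nginx", "postgresql"},
    "listening_ports": {80, 5432},
    "nginx_conf": {"worker_processes": 4},
}


def test_web_server_installed():
    # What software is installed?
    assert "nginx" in facts["packages"]


def test_web_server_running_on_port_80():
    # Is it currently running, and where is it listening?
    assert "nginx" in facts["services_running"]
    assert 80 in facts["listening_ports"]


def test_web_server_configuration():
    # How is it configured?
    assert facts["nginx_conf"]["worker_processes"] == 4


for test in (test_web_server_installed,
             test_web_server_running_on_port_80,
             test_web_server_configuration):
    test()
    print(test.__name__, "passed")
```

Run on every continuous integration build, checks like these catch a misconfigured environment before the team ever deploys to it.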
With our automated infrastructure tests written in code, we can apply one additional agile practice: test-driven development or TDD. This means before we modify our environment to install some new software or make any other change, we can write a test that asserts certain conditions are true and run that test in advance of writing the code.
The new tests will fail the first time, but now we have a concrete way of knowing when we are done: when the tests pass. We focus on modifying the infrastructure code to make them pass and, when done, commit our changes. We work in this “test-first” approach in small increments, supported by running tests frequently to ensure we don’t get off track.
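The red/green cycle on infrastructure code can be illustrated with a toy Python example. The `installed` set and the memcached requirement are hypothetical stand-ins for real environment state and a real Puppet or Chef change.

```python
# Current environment state (stand-in for what provisioning has built).
installed = {"java", "tomcat"}


def test_memcached_installed():
    # The new requirement, written as a test before any code changes.
    assert "memcached" in installed


# Red: the test fails against the unchanged environment.
try:
    test_memcached_installed()
except AssertionError:
    print("red: memcached not yet provisioned")

# Modify the infrastructure code (stand-in for a Puppet/Chef change).
installed.add("memcached")

# Green: the same test now passes, so the change can be committed.
test_memcached_installed()
print("green: memcached provisioned")
```

The test doubles as a testable specification: anyone rebuilding the environment later can run it to confirm the requirement still holds.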
Not only do we have a set of tests that validate quality when we are complete, but working this way improves quality and productivity because quality is kept high throughout the process. Agile developers using TDD learned this long ago. TDD also ensures we can correctly and repeatably set up infrastructure, while the automated tests capture the requirements in a testable specification.
Computing, storage and networking continue the migration toward infrastructure as code, or software-driven environments. Whether in a private data center or a public cloud, agile development practices honed over the last decade can help IT organizations transition from bottlenecks to partners in application development. When infrastructure is managed with code, continuous integration, automated testing, automated releases and test-driven development become integral practices for faster time to market, business scalability, higher quality and productivity.