One of the main goals of DevOps is to streamline the software development lifecycle (SDLC) by removing waste from the system. Waste often takes the form of bottlenecks: things within the system that slow forward progress and introduce unnecessary wait time or tasks. This waste can be caused by inefficient processes, technology issues, and organizational or people issues. Successful companies are able to look at the entire value stream to identify waste and then systematically work to reduce it from the SDLC, continuously improving and achieving better speed to market, improved quality, and higher reliability. Companies that can continuously improve in this fashion become high-performing companies, which often results in improved customer satisfaction, better productivity, and improved financial results. This is the ultimate dream of the C-level types who are looking to transform their companies with DevOps. Continue reading DevOps and Bottlenecks
As technologists and analysts for the virtualization and cloud spaces, we are always talking about various places within the IT stack. As we discussed in the article Technical Arc of Virtualization, we have noticed that many people are moving up the IT stack, forming new and more interesting substrates of IT. These substrates simplify the actions one takes to deploy new and more interesting applications, while at the same time abstracting away the physical and virtual layers of the stack—in essence, forming new substrates on top of which to build. Continue reading The Substrates of IT
As companies embrace the DevOps movement, they rely heavily on automation to improve the time to market for new features and services. DevOps is a long, never-ending journey with a goal of continuously improving the software delivery process, resulting in better products and services and, ultimately, happier customers. At the beginning of their DevOps journeys, many companies focus on continuous integration (CI), in which they automate the build process. Automated testing is implemented so that builds fail if any changes break the baseline tests. The idea is to never move bugs forward, catching them early in the process.
Puppet Labs has published its annual State of DevOps report, and it is loaded with interesting information as always. Last year’s report brought home the point that DevOps was becoming widely accepted in the enterprise. This year’s report further validates that point and provides us with some interesting insights from surveying a wide variety of companies in different phases of their DevOps journey. Continue reading The State of DevOps
I had the pleasure of recording a podcast recently with Battery Ventures Technology Fellow Adrian Cockcroft. Adrian is well known from his days at Netflix and can frequently be seen at major conferences presenting on DevOps, microservices, and cloud computing. Last month, both Adrian and I attended DockerCon in San Francisco. Our conversation started with a discussion about Docker. Continue reading Podcast with Adrian Cockcroft
I had the opportunity to attend Red Hat Summit and DevNation. Nearly every answer to any question at both events was to “use containers” to solve that problem. While some of those responses were undoubtedly true, others were not entirely so. Yes, you can use containers to solve many problems, but what was often overlooked was the underlying infrastructure necessary to provide a base for those containers. Overall, Red Hat Summit delivered on its promise; I will follow up on DevNation at a later time. Continue reading RedHat Summit: All about Containers