Institutional knowledge is leaving companies at a rapid rate. Employees are mobile, moving between companies quickly; just as they learn something important, they are out the door, and that knowledge is not always transferred to those staying behind. Here one day, gone the next. How can you explain a business decision, technology decision, or any other decision without information? Architects, developers, and business folks should be writing documents to cover all major decisions, but such documents are typically written long after the decisions have been made. We lack the reasons behind the decisions, the original questions asked, and all the work leading up to them. We do not want to lose institutional knowledge. Now, into this breach comes a new set of tools.
Transformation & Agility
Transformation & Agility concerns using the technical agility delivered by virtualization and cloud computing, coupled with Agile development practices, to improve business agility, performance, and results. This includes the agility derived from:
- Implementing Agile and DevOps methodologies
- Application and system architectures
- Implementing IaaS, PaaS, and SaaS clouds
- Monitoring the environment, coupled with processes for resolving problems quickly
- Achieving continuous availability through high-availability and disaster recovery products and procedures
Transformation covers the journey from A to Z and all points between: how you get there and the roads you will travel; how decisions made on day zero, day one, or even day three will affect later decisions; and what technical, operational, and organizational pitfalls can accompany an implementation. We examine what tool sets are required for Agile cloud development and delve into other aspects of Agile development that integrate with cloud computing, SaaS, and PaaS environments, including DevOps, Scrum, XP, and Kanban.
Innovation is a critical part of any business, particularly a software business. However, as we know from Clayton M. Christensen’s book The Innovator’s Dilemma, it is hard to innovate in a large company. The challenge is that many innovations will disrupt the existing revenue stream. But without innovation, the revenue stream will inevitably end. To remain a viable business, innovation needs to be fostered and adopted, even at the risk of short-term self-disruption. One way a growing company can remain innovative is by encouraging engineering teams to innovate through hackathons. A hackathon is a short period, usually twenty-four hours, during which a group of developers collaborates to build software quickly. The aim is a high-energy drive to prove out ideas or build a rapid prototype. The events usually run on a diet of caffeine and pizza. The hackathon participants each bring their own ideas, and the group together decides which ideas to pursue. The developers form their own small, temporary teams to work on their chosen ideas. At the end of the hackathon, each team reports to the whole group on its idea and the progress it was able to make. This type of brief but intense activity is invigorating for the creative side of software development. Participants typically work all night with few breaks in order to build as much of the idea as possible. This rapid development of a new idea is usually a welcome break from the normal software development processes of bug fixing and QA testing.
How do you distribute an application that uses containers? This seems to be an odd question. Container-based applications are usually associated with Software as a Service (SaaS) applications and public cloud deployment. However, there is still a place for software that is purchased and installed on-premises in a data center. If the software is in the form of containers that will run inside the customer’s data center, then how will the software be deployed and managed? How will scaling work, and how will updates be deployed?
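As a concrete illustration of what on-premises distribution can look like (a hypothetical sketch, not something described above), one common pattern is for the vendor to ship a Compose file alongside its images, so the customer can pull and run the application inside their own data center. The registry, image names, tags, and ports below are invented for the example:

```yaml
# Hypothetical Compose file a vendor might ship with an on-premises,
# container-based application. Image names, tags, and ports are
# illustrative only.
version: "3"
services:
  web:
    image: registry.example.com/acme/webapp:1.4.2   # pulled from the vendor's registry
    ports:
      - "8080:8080"
    depends_on:
      - db
  db:
    image: registry.example.com/acme/appdb:1.4.2
    volumes:
      - db-data:/var/lib/appdb   # data never leaves the customer's data center
volumes:
  db-data:
```

With a file like this, scaling and updating become operational questions for the customer: for example, `docker compose up --scale web=3` to add web instances, or pulling a newer image tag to update, which is precisely the management gap the questions above point to.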
There has been a lot of discussion recently about whether forking Docker makes sense. Driving this discussion are complaints from the Docker community and ecosystem about the speed at which Docker is releasing software and the perceived quality of those releases. Unless you have been hiding under a rock lately, you know that Docker is one of the most popular open-source projects in the world. Docker’s rise from a concept to a dominant force in the industry is a story for the ages. As Docker and containers continue to gain adoption in both non-production and production environments, vendors have been flocking to provide services that support or enhance Docker containers.
Moving up the stack does not simplify anything. Complexity increases. Let us look at this from several angles: management, security, development, networking, and storage. In essence, the entire IT stack. Because complexity increases, we need DevOps (or SecDevOps) to help us over the rough spots. We need new rules of engagement and even new ways of working. This makes the new IT stack even more complex.
Now that VMworld is over, it is time to digest everything we learned: to pick at the messaging for the kernels of truth and direction within it. Many found the VMworld keynotes somewhat bland and the show floor much the same. However, there was gold in both. We can discuss the show floor later; I’d like to look more deeply at the messaging first. The gold was hard to piece together amid all the different messages. Themes included cross-cloud, Photon, NSX, and VSAN. These may seem disjointed until you look deeper. The messaging could be better, and I expect it to improve by VMworld Barcelona. Yet there was clearly a path forward for each of VMware’s customers.