The tracking of user activities increases over the holidays. A user is either a device (car, phone, tablet, etc.), a person (shopping online or off), or a thing (a credit card, etc.). The goal is to offer the users, and the owners of the devices and things, opportunities to buy more, to shop more, to be tracked more. It is an opportunity, one that normal people cannot avoid. A recent holiday saw the same things arriving in my mailbox that show up every Monday, but each web visit led somewhere else.
Transformation & Agility
Transformation & Agility covers the technical agility delivered by virtualization and cloud computing, coupled with Agile Development practices that improve business agility, performance, and results. This includes the agility derived from:
- The implementation of Agile and DevOps methodologies
- Application and system architectures
- The implementation of IaaS, PaaS, and SaaS clouds
- Monitoring of the environment, coupled with processes for resolving problems quickly
- Continuous availability through high-availability and disaster recovery products and procedures
Transformation covers the journey from A to Z and all points between: how you get there and the roads you will travel; how decisions made on day zero or one, or even day three, will affect later decisions; and what technical, operational, and organizational pitfalls can be associated with an implementation. We examine what tool sets are required for Agile Cloud Development and delve into other aspects of Agile Development that integrate with cloud computing, SaaS, and PaaS environments, including DevOps, Scrum, XP, and Kanban.
I was fortunate enough to attend an invite-only Google event to get briefed on numerous announcements pertaining to Google’s cloud services. The announcements included updates on products ranging from Google Docs to Google’s public cloud offering. Additional information was shared on Google’s go-to-market strategy and staffing ambitions as it gears up to gain ground on AWS and Azure over the next few years.
Institutional knowledge is leaving companies at a rapid rate. Employees are very mobile, moving between companies fairly rapidly. Just as they learn something important, they are out the door. That knowledge is not always transferred to those staying behind. Here one day, gone the next. How can you explain a business decision, technology decision, or any other decision without information? Architects, developers, and business folks should be writing documents to cover all major decisions, but such documents are usually written long after the decisions have been made. We lack the reasons behind the decisions, the original questions asked, and all the work leading up to them. We do not want to lose institutional knowledge. Now, into this breach comes a new set of tools.
Innovation is a critical part of any business, particularly a software business. However, as we know from Clayton M. Christensen’s book The Innovator’s Dilemma, it is hard to innovate in a large company. The challenge is that many innovations will disrupt the existing revenue stream. But without innovation, the revenue stream will inevitably end. To remain a viable business, innovation needs to be fostered and adopted, even at the risk of short-term self-disruption. One way a growing company can remain innovative is by encouraging engineering teams to innovate through hackathons. A hackathon is a short period, usually twenty-four hours, during which a group of developers collaborates to write some software very fast. The aim is a high-energy drive to prove out ideas or build a rapid prototype. The events usually run on a diet of caffeine and pizza. The hackathon participants each bring their own ideas, and the group together decides which ideas to pursue. The developers form their own small, temporary teams to work on their chosen ideas. At the end of the hackathon, each team reports to the whole group on its idea and the progress it was able to make. This type of brief but intense activity is invigorating for the creative side of software development. Participants typically work all night with few breaks in order to build as much of the idea as possible. This rapid development of a new idea is usually a welcome break from the normal software development processes of bug fixing and QA testing.
How do you distribute an application that uses containers? This may seem like an odd question. Container-based applications are usually associated with Software as a Service (SaaS) applications and public cloud deployments. However, there is still a place for software that is purchased and installed on-premises in a data center. If the software is delivered as containers that will run inside the customer’s data center, how will it be deployed and managed? How will scaling work, and how will updates be rolled out?
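One minimal sketch of an answer: the vendor publishes versioned images to a registry and ships the customer a declarative manifest, such as a Compose file, that pins those versions. The registry, image names, and service layout below are hypothetical, purely for illustration:

```yaml
# docker-compose.yml - hypothetical on-premises delivery of a two-tier app.
# The vendor publishes versioned images; the customer pulls and runs them
# inside their own data center.
services:
  web:
    image: registry.example.com/acme/web:1.4.2   # pinned tag; bump to upgrade
    ports:
      - "8080:8080"
    depends_on:
      - db
    deploy:
      replicas: 3            # simple scaling knob when run under Swarm
  db:
    image: registry.example.com/acme/db:1.4.2
    volumes:
      - dbdata:/var/lib/data # state survives image upgrades
volumes:
  dbdata:
```

Under this scheme an update amounts to editing the tags and re-running `docker compose up -d`. An orchestrator such as Kubernetes offers richer rolling-update and scaling semantics, but the distribution pattern, versioned images plus a declarative manifest, stays the same.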
There has been a lot of discussion recently about whether forking Docker makes sense. Driving this discussion are complaints from the Docker community and ecosystem about the speed at which Docker is releasing software and the perceived quality of those releases. Unless you have been hiding under a rock lately, you know that Docker is one of the most popular open-source projects in the world. Docker’s rise from a concept to a dominant force in the industry is a story for the ages. As Docker and containers continue to gain adoption in both non-production and production environments, vendors have been flocking to provide services that support or enhance Docker containers.