Lock-In, Inertia, and Gravity

Network Virtualization

I recently saw a conversation about avoiding lock-in, and about the fact that at least some degree of lock-in is impossible to avoid. Like most terms in IT, “lock-in” means subtly different things depending on context. Its most common usage concerns vendors. When we complain about lock-in, we are usually complaining that once a vendor’s product is in use, it’s very difficult to mix and match it with similar products from other vendors.

Used this way, “lock-in” is usually a derisive term for something to be avoided. Salespeople often invoke it to dissuade customers from looking at their competitors, and fear of lock-in can put people off considering products altogether. The mechanism behind lock-in is usually assumed to be both technical (which implies it is unavoidable) and intentional (and so malicious). This mechanism can be anything from proprietary communication protocols to unique physical adapters. Stacking switches are a good example: it is impossible to mix switches from different manufacturers in a stack and still get the benefits of the stacking system.

The argument I saw, however, took a much larger view than this. Implementing any system involves a basic learning curve: staff need to get to know the system, and time must be invested to that end. This leads to a second kind of lock-in, in which it is technically possible to adopt a new technology or replace the old with the new, but there is resistance because of the need to learn new systems. This kind of lock-in is much more like the physical concept of inertia; a buildup of knowledge leads to a reluctance to change. That in turn leads to consolidation around the original technology, and the more time passes, the more inertia the system gains. The inertia grows as more cash is pumped into a system or technology: the capital invested is an obvious pull against the idea that a company should change to something new. Finally, there is always a political investment. All decisions require sign-off and persuasion, so when a new direction is considered, are we saying the original decision was wrong? Together these things can exert a huge drag on progress—a large amount of inertia—and we see that Newton’s laws may well apply. Systems won’t change direction without a big push!

However, if this is true—if the inertia is often not technical—then we can see that all systems have inertia. The questions are: “How big is the inertia? How much force do we need to use to escape it?” The inertia starts to look a lot more like gravity, almost as though systems have a sort of mass defined by their technical specification and the capital and political capital invested in them. This mass will vary from system to system. It will vary for the same system in different environments. An environment with a lot of sysadmins who are used to Windows, Active Directory, and Exchange would see a lot more push against a Linux system with Apache than an environment with a lot of BSD sysadmins would. We start to realise that we must consider the mass of a system and how (or if) we will want to escape it in the future. After all, we wouldn’t land on the moon without the fuel to launch us back off it again!

One way to reduce the effect of gravity is to jump first to a staging point. In real life, we did this in low Earth orbit (LEO) before moving on to the moon. In my metaphor, Amazon introduced AWS as a development environment and a place to expand into at launch. That gave companies the ability to build skills, invest in training, and prove the concept before migrating more wholesale. The development environment was a sort of LEO.

The cell-based model (or two-pizza model) is another way in which this works, reducing the inertia by investing in small amounts of effort that connect together. This is rather like building the ISS by sending up modular components. We build a space station one proven section at a time, with small tasks that aren’t held as tightly by gravity’s pull. Once we have the station, we have yet another staging post, farther away from that huge monolith we started on, but now with much less gravity to hold us. We are much more agile. Importantly, we build in steps that can fail (relatively) painlessly.

Posted in SDDC & Hybrid Cloud