Inventing Complexity


Recently, a number of marketing campaigns seem to be inventing complexity to give products the appearance of some sort of competitive advantage. The invented complexity involves real-world items that many folks simply do not use, or even care about, in order to make a product look like something different. We have spoken about in-kernel vs. VSA in the past, but now we are seeing invented complexity within the mainstream storage world.

Inventing complexity based on actions or efforts not normally chosen, in order to claim that your product simplifies that complexity while ignoring the real complexity within any solution, is misleading at best. Consider the following items. How many of you do these by hand these days, or even worry that they are complex?

  • Load balancing across controllers
  • Aggregating controllers
  • Masking LUNs
  • Configuring NPIV
  • Managing extents

These are all items we have been told for years not to worry about, and in fact not to do, as they could impact performance. As a result, most of these items are not in use within many data centers. NPIV may be the exception, but it is used more for reporting than anything else.

So, a tool that simplifies these actions, when they are not even used, could be said to make things less complex. But that also implies that the solution may be more complex in many other areas; we do not need to invent complexity to show an advantage. These same tools require that the hardware compatibility list be followed carefully and that a specific number of nodes be involved. Further, there is more software to integrate in order to manage the complexity. Yes, that software has a very nice user interface that seems simpler, but simplicity often hides complexity. The less there is for you to change, the simpler things seem.

However, within the storage world, differentiation is not based on unused features or capabilities, nor on IOPS (at least, not IOPS measured with unusually small block sizes); instead, it is based on latency. Anything that reduces latency from the workload through to the storage is a clear winner. To do that, you need a fairly complex system, one based not only on host-side caching but also on fast interlinks to storage and fast storage response.
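As a rough back-of-the-envelope illustration of why host-side caching matters so much here, effective latency is just a weighted average of cache hits and trips to backend storage. The numbers below are illustrative assumptions, not figures from any vendor:

```python
# Sketch: effective latency with a host-side cache (all figures are assumptions).
def effective_latency_us(hit_rate, cache_latency_us, backend_latency_us):
    """Average latency seen by the workload, given a cache hit rate."""
    return hit_rate * cache_latency_us + (1 - hit_rate) * backend_latency_us

# Assumed figures: ~25 us for a local flash cache hit,
# ~500 us for a round trip to networked storage.
no_cache = effective_latency_us(0.0, 25, 500)
with_cache = effective_latency_us(0.9, 25, 500)
print(no_cache, with_cache)
```

Even a modest hit rate collapses the average latency toward the cache's, which is why the whole data path, and not a synthetic IOPS number, is what differentiates solutions.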

Moreover, building a low-latency solution requires complexity, and it requires concentrating on the data path. It requires caching as close to the data service as possible; for all-flash arrays, that also implies a server-side caching layer. The same technology will work for cloud-based storage solutions. However, all of these needs and designs are hidden under a layer of simplicity. The only time we see the complexity is during setup, when we have to crack the box open to add the hardware necessary to make things work.

This is why hyperconverged infrastructure (HCI) systems are so very popular. The complexity is removed in favor of a single SKU: one purchase with a guaranteed level of behavior. Yet, the same companies claim that building the same thing yourself is too complex. This makes no sense to me. In order to analyze a problem, I often have to understand what is happening within the system and trace data through it to find the issue. If I cannot understand how the system is built, that trace does not work very well.

So, simple solutions often hide complexity. That is good, but we need to understand that complexity in order to debug the system; that is a necessity. We do not need to invent complexity when it already exists, even if we cannot see it beneath the covers of a very simple user interface.

HCI systems are complex, but the complexity of building them has been removed. That also means important knowledge is not transferred to the user. This in turn means better support services are required, as knowledge has been outsourced to remove complexity.

This requires a level of trust in your vendor. Just as you need to trust your automation, when complexity is abstracted away you need to trust what lies below the user interface: the tool you interact with on a daily basis.

We do not need to invent complexity. It already exists all around us in IT; we just consume IT resources very differently today, and this trend will continue. Meanwhile, new chipset features, new form factors, and new hardware solutions are coming that will benefit the cloud, the virtual environment, and the next generation of applications.

Where are you within this wave of simplification?

Edward Haletky
Edward L. Haletky, aka Texiwill, is the author of VMware vSphere(TM) and Virtual Infrastructure Security: Securing the Virtual Environment as well as VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers, 2nd Edition. Edward owns AstroArch Consulting, Inc., providing virtualization, security, network consulting and development and The Virtualization Practice where he is also an Analyst. Edward is the Moderator and Host of the Virtualization Security Podcast as well as a guru and moderator for the VMware Communities Forums, providing answers to security and configuration questions. Edward is working on new books on Virtualization.