At Dell Storage Forum 2012, Dell introduced a new converged infrastructure that features an EqualLogic array occupying two slots of a new blade enclosure, moving storage closer to the workloads running within the blades. This is a very interesting and powerful play by Dell, but I kept asking myself: is this really a converged infrastructure? Or is it just an integrated blade enclosure like others already offer?
This depends on what you mean by converged. At one end of the spectrum is what I would call just a bunch of devices; at the other end, everything is fully integrated, from the views seen by the network operations center, monitoring, security, and management all the way down to the hardware. Convergence is a continuum that is not about ordering a single SKU (nor about packaging), but about how deeply things are integrated.
Blades naturally present one form of converged infrastructure, and Dell's PowerEdge-C enclosures present another form, though one missing integrated networking. Even so, each is just one spot along the continuum of convergence. The new blade enclosure with its EqualLogic array sits further along this continuum, and if we add the Dell management tools to this new hardware, we move further still toward total convergence. But what makes up the axes of this continuum?
- compute, storage, and network hardware integration
- device and hypervisor management integration
- security, monitoring, and adjunct software integration
- workload management, deployment, and performance or other workload-tuning integration
In one of the coolest keynotes I have seen, Dell presented an “architecture on the fly” of a future that would move Dell products even further along this continuum. This architecture makes use of their recent purchase of RNA Networks as a global host cache.
The design pairs non-volatile memory (NVM) cards with a global Fluid Cache: reads go through the nearest NVM cache in the cluster (which could be on the same machine). On write, the data is written to one NVM, mirrored to the closest neighbor, and then destaged from NVM to storage. This extends Compellent tiering into the servers, and write acknowledgement happens at the server, not at the array controller. In some cases the nearest neighboring NVM may not have the LUN presented from the Compellent storage due to current presentation and zoning; in that case, the LUN would be presented behind the scenes so that the write can complete.
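The write and read paths described above can be sketched as a toy model. This is purely illustrative: the class and method names are my own invention, not any real Fluid Cache API, and the destage to storage is done inline here rather than asynchronously for clarity.

```python
class FluidCacheSketch:
    """Toy model of the described write path: write to local NVM,
    mirror to the nearest neighbor, acknowledge at the server,
    then destage to back-end storage. All names are illustrative."""

    def __init__(self, node_id, neighbors, storage):
        self.node_id = node_id
        self.neighbors = neighbors   # peer caches, ordered nearest-first
        self.storage = storage       # back-end array (e.g. a Compellent tier)
        self.nvm = {}                # local NVM cache: block -> data

    def write(self, block, data):
        # 1. Write to the local NVM card.
        self.nvm[block] = data
        # 2. Mirror to the closest neighbor so the block survives node loss.
        if self.neighbors:
            self.neighbors[0].nvm[block] = data
        # 3. Acknowledge at the server -- before the array controller sees it.
        acknowledged = True
        # 4. Destage from NVM to storage (inline here; asynchronous in concept).
        self.storage[block] = data
        return acknowledged

    def read(self, block):
        # Reads come from the nearest NVM holding the block (local first),
        # falling back to peers, then to the array.
        if block in self.nvm:
            return self.nvm[block]
        for peer in self.neighbors:
            if block in peer.nvm:
                return peer.nvm[block]
        return self.storage.get(block)
```

The key design point the sketch captures is step 3: the acknowledgement returns once the data sits in two NVM locations, so write latency is decoupled from the array controller.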
This presents an interesting set of security concerns, as we may now mix data between trust zones. Perhaps we would need to set up Fluid Cache clusters by trust zone, or integrate a bit of multi-level security that restricts the nearest-neighbor choice to a node within the same trust zone; if no neighbor is in the zone, writes would go directly to storage, bypassing the cache (which would still be used for reads). However, if storage is encrypted within the virtual machine, it would make no difference which NVM did the write, as the data would be encrypted before it hits the NVM and its nearest neighbor. So for high-security data, encryption within the VM would still come into play.
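The trust-zone policy above could look something like the following sketch. Again, the function names, the `(node_id, zone)` neighbor tuples, and the `vm_encrypts_data` flag are all hypothetical, chosen only to make the decision logic concrete.

```python
def pick_mirror(neighbors, trust_zone):
    """Return the nearest neighbor in the same trust zone, or None to
    signal a direct write to storage (write-cache bypass).
    `neighbors` is ordered nearest-first as (node_id, zone) pairs."""
    for node_id, zone in neighbors:
        if zone == trust_zone:
            return node_id
    return None  # no safe neighbor: bypass the write cache

def write_target(neighbors, trust_zone, vm_encrypts_data):
    """Choose a mirror target for a write under the trust-zone policy."""
    # If the VM encrypts its own data, any neighbor is acceptable:
    # the NVM only ever sees ciphertext.
    if vm_encrypts_data:
        return neighbors[0][0] if neighbors else None
    return pick_mirror(neighbors, trust_zone)
```

A `None` result would tell the caller to skip mirroring and write straight to the array, while reads could still be served from any cache already holding the block.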
Dell and other companies present interesting views of converged infrastructure, and each has its own definition. What else would you add as a key component of the converged infrastructure continuum? Where would you place Dell vStart, VCE Vblock, FlexPod, and others along this continuum?