Infinio is a Boston-based company that has a very interesting play on flash acceleration. Having recently sat through a briefing with its representatives, I can say that Infinio’s vision and future are bright.
Why am I saying this? I have to admit that I had been laboring under a misconception about this company. I had thought it was just another flash acceleration company. I mean, it handles read acceleration of vSphere-presented storage, doesn’t it? Well, yes it does, and like the other flash acceleration companies, it handles it well.
On the financial front, Infinio is well funded, having raised circa $24M in three rounds of funding. Here is an interesting point: rounds A and B were funded by the same set of VCs. This is quite rare and shows that Infinio’s backers believe in the direction the company is moving.
So, what is the secret sauce?
Basically, the Infinio product sits between the traditional storage layer and the compute layer. It accelerates virtual machine performance by carving out a portion of each compute node’s system RAM as a read cache, and by deduplicating the cached data to stretch the cache’s effective capacity.
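Infinio has not published its implementation at this level of detail, so the following is only a toy sketch of the deduplication idea: cache entries are addressed by a content hash, so identical blocks cached under many logical addresses consume RAM only once. All class and method names here are invented for illustration.

```python
import hashlib

class DedupedCache:
    """Toy content-addressed read cache: identical blocks are stored once."""

    def __init__(self):
        self.store = {}   # content digest -> block payload (stored once)
        self.index = {}   # (volume, lba)  -> content digest

    def put(self, volume, lba, block):
        digest = hashlib.sha256(block).hexdigest()
        self.store.setdefault(digest, block)   # keep payload only if unseen
        self.index[(volume, lba)] = digest     # logical address -> digest

    def get(self, volume, lba):
        digest = self.index.get((volume, lba))
        return None if digest is None else self.store[digest]

    def logical_blocks(self):
        return len(self.index)    # blocks the cache answers for

    def physical_blocks(self):
        return len(self.store)    # RAM actually consumed, in blocks
```

With many near-identical VM images (OS files, common binaries), the gap between `logical_blocks()` and `physical_blocks()` is exactly the “extra capacity” that deduplication buys.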
Here, there is a departure from the standard paradigm. Infinio takes the RAM contributed by each node and combines it into a pseudo-datastore shared across the machines in the Infinio-accelerated cluster. This gives Infinio the ability to move VMs with vMotion without the traditional downside of losing acceleration on migration. The graphic below shows the compute nodes sharing their RAM in a virtual datastore.
What this means is that Infinio can service cached read requests from the local node or from a remote cache on another node in the cluster. The graphic below better illustrates the relationship between data and storage: each node keeps a copy of its own cached data plus pointers to the shared data held on the other nodes.
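The cooperative lookup path described above can be sketched as a simple three-tier read: local cache first, then a peer’s cache over the network, and only then the backing datastore. This is my own illustrative model of the behavior, not Infinio’s code; the node names and structures are assumptions.

```python
class NodeCache:
    """Toy per-host read cache participating in a cluster-wide shared cache."""

    def __init__(self, name):
        self.name = name
        self.local = {}    # block address -> data cached in this host's RAM
        self.peers = []    # other NodeCache instances in the cluster

    def read(self, addr, datastore):
        # 1. Local RAM hit: the fastest path.
        if addr in self.local:
            return self.local[addr], "local"
        # 2. Remote hit: fetch the block from a peer's cache over the network,
        #    which is still far cheaper than going to spinning disk.
        for peer in self.peers:
            if addr in peer.local:
                data = peer.local[addr]
                self.local[addr] = data    # populate the local cache too
                return data, f"remote:{peer.name}"
        # 3. Miss everywhere: read from the backing datastore and cache it.
        data = datastore[addr]
        self.local[addr] = data
        return data, "datastore"
```

Because a migrated VM’s hot blocks are still resident on its old host’s cache, its first reads after vMotion land in tier 2 rather than on the array, which is why migration does not reset the VM to cold-cache performance.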
Another impressive feature of Infinio is that installation is seamless to the environment. Just deploy the accelerator VMs and the management VM on the hosts in the cluster, and the cache starts working immediately, offloading reads from the array. More impressive still, because the cache is shared between all machines in the cluster, vMotion works without crippling the VM on migration, as there is no need to rebuild the read cache from scratch on the destination host.
Even more impressive, if for some reason I need to disable the acceleration, I just switch off the accelerator VM, and that host talks directly to the datastore again, without the caching engine in the path.
There is one potential market access limitation: the product currently works only on vSphere, and it can only accelerate NFS-based datastores. The vSphere restriction is not much of a limitation, given the market share VMware vSphere holds in the virtualization space. More of a worry is the reliance on NFS: the large majority of deployments have traditionally been on block-based storage, whether Fibre Channel or iSCSI. But let us think outside the box. There is one portion of the virtualization space that makes heavy use of NFS mounts, and that is the desktop market, be it VDI or DaaS. Here, Infinio could steal a march.
So, Infinio does not yet accelerate block storage, but that does not matter in the slightest. It has a niche, and it is poised to exploit it fully. Infinio’s go-to-market strategy is also distinctive: its sales model is more akin to Amazon’s or Google’s than to a traditional IT sales strategy. Go to Infinio’s website, and you can just download the application and start using it. You do not need to hand over your great-grandmother’s inside leg measurement to obtain it, either. I like this: it is clean, it is open, and I can download, install, test, and remove it all without a server reboot or a pesky sales call. All in all, Infinio is a breath of fresh air, and its hire of Scott Davis from VMware is a good indicator of its intention to play seriously in this market space.