June 16, 2014 – Today, SanDisk agreed to acquire Fusion-io in an all-cash deal that values the data storage company at about $1.1 billion. SanDisk will fund the acquisition with cash on its balance sheet, and the deal is expected to close in SanDisk's fiscal third quarter.
Salt Lake City–based Fusion-io manufactures flash memory products and software used in data centers, while Milpitas, California–based SanDisk makes flash memory cards, such as the microSD cards that store files on smartphones and tablets. The acquisition gives SanDisk a chance to focus more on business customers and to move away from what are perceived to be low-margin consumer goods.
“Fusion-io will accelerate our efforts to enable the flash-transformed data center, helping companies better manage increasingly heavy data workloads at a lower total cost of ownership,” SanDisk CEO Sanjay Mehrotra said in a statement.
SanDisk has stated that it will make a tender offer for all outstanding shares of Fusion-io at $11.25 a share. That is a 21% premium over Friday's closing price, a tidy return for Fusion-io investors in a company that has posted losses for five consecutive quarters.
On the whole, this is a shrewd acquisition for SanDisk, giving it a valuable entry point into the burgeoning data center market. Fusion-io, with its flash card, is poised to take a significant share of the flash acceleration market. Combined with SanDisk's FlashSoft product, which already runs on multiple operating systems, SanDisk will have a full I/O acceleration solution and will compete directly with the current incumbents, which concentrate only on the vSphere space.
In the world of virtualization storage, it seems all we talk about lately is flash and SSD. There is a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you needed to add capacity in order to add performance, because each disk, each "spindle," could only support a certain number of I/Os per second, or IOPS. This was governed by the mechanical nature of the drives themselves, which had to wait for the seek arm to move to a different place on the disk, wait for the arm to settle after the move, wait for the desired sector to rotate underneath the read head, and so on. There is only so much of that activity a drive can do in a second, and to do more of it you needed to add more drives. That approach has drawbacks: increased power draw, more parts and therefore more chances of failure, and higher licensing costs, since many storage vendors charge based on capacity.
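To put rough numbers on that mechanical limit, here is a back-of-the-envelope sketch. The seek time and spindle speed below are illustrative assumptions, not figures from any particular drive's datasheet:

```python
import math

def disk_iops(avg_seek_ms: float, rpm: int) -> float:
    """Estimate random IOPS for one spindle from average seek time and RPM."""
    # On average the desired sector is half a rotation away from the head.
    rotational_latency_ms = (60_000 / rpm) / 2
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000 / service_time_ms

# A typical 7,200 RPM drive with an ~8 ms average seek time:
print(round(disk_iops(8.0, 7200)))                    # → 82 IOPS per spindle

# Reaching, say, 8,000 random IOPS would take on the order of 100 such drives:
print(math.ceil(8000 / disk_iops(8.0, 7200)))         # → 98 drives
```

Each extra spindle adds power draw, failure points, and (with capacity-based pricing) licensing cost, which is exactly the drawback the paragraph above describes.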
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, the act of seeking on a solid-state disk is a purely logical one. There are no heads, no sectors, no rotation speeds; access happens at electronic speeds, limited only by how fast the controller can go. As such, flash memory can deliver enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money in licensing fees, and your workloads run faster.
One sure way to improve performance is to cache an application's non-dynamic data. We did this to improve the overall performance of The Virtualization Practice website. However, there are many places within the stack where caching can improve overall performance, which got me thinking about all the different types. At the last Austin VMUG, there were at least three vendors selling caching solutions promising anywhere from 2x to upwards of 50x improvements in application performance. That is quite a lot of improvement. Where do all these caching products fit into the stack?
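The core idea behind all of these products is the same: keep hot data in a faster tier and serve repeat requests from there. As a minimal sketch (not any vendor's implementation), here is a tiny least-recently-used cache of the kind you might put in front of expensive-to-generate, rarely changing web content; the URL paths and page strings are invented for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """A tiny least-recently-used cache keyed by URL path (illustrative only)."""

    def __init__(self, capacity: int = 128):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None              # cache miss
        self._store.move_to_end(key) # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        self._store[key] = value
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict the least recently used

cache = LRUCache(capacity=2)
cache.put("/home", "<html>home</html>")
cache.put("/about", "<html>about</html>")
cache.get("/home")                        # touch /home: now most recently used
cache.put("/blog", "<html>blog</html>")   # over capacity: evicts /about
print(cache.get("/about"))                # → None (evicted)
print(cache.get("/home"))                 # → still cached
```

The same eviction logic shows up, with far more engineering, whether the cache sits in the application, the hypervisor, or a flash tier in front of spinning disk; only the location in the stack changes.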