Welcome to The Virtualization Practice’s week-long coverage of VMworld US 2015. Tune in all week for our daily recap of the major announcements and highlights from the world’s premier virtualization and cloud conference.
With all the forward-looking business out of the way (see the Day 1, Day 2, and Day 3 recaps), VMworld took a breath yesterday and focused on other parts of the ecosystem. The first annual Developer Day was held as part of the VMworld DevOps program track, and it included a hackathon where coders and non-coders alike could compete for prizes. Non-coders had a series of increasingly difficult challenges to complete. Coders worked to create the most useful, creative, and complex tools and services on vCloud Air; entries were judged at the end of the day, with prizes including a guitar signed by Alabama Shakes and Neon Trees, the VMworld Party bands. Continue reading VMworld US 2015: Day 4 Recap→
June 16, 2014 – Today, SanDisk agreed to acquire Fusion-io in an all-cash deal that values the data storage company at about $1.1 billion. SanDisk will fund the acquisition with cash held on its balance sheet, and it is expected that the deal will finalize in the company’s fiscal third quarter.
Salt Lake City–based Fusion-io is a manufacturer of flash memory products and software used in data centers, while Milpitas, California–based SanDisk makes flash memory cards, such as the microSD cards that store files on smartphones and tablets. This acquisition will give SanDisk the chance to focus more on businesses as customers and to move away from what are perceived to be low-margin consumer goods.
“Fusion-io will accelerate our efforts to enable the flash-transformed data center, helping companies better manage increasingly heavy data workloads at a lower total cost of ownership,” SanDisk CEO Sanjay Mehrotra said in a statement.
SanDisk has stated that it will make a tender offer for all outstanding shares of Fusion-io at $11.25 a share. This is a 21% premium over Friday’s closing price: a tidy margin for Fusion-io investors, and welcome news for a company that has traded at a loss for the last five quarters in a row.
On the whole, this is a shrewd acquisition by SanDisk, as it provides a valuable entry point into the burgeoning data center market. Fusion-io, with its flash card, is poised to take a significant share of the flash acceleration market. Combined with SanDisk’s FlashSoft product, it gives SanDisk a complete I/O acceleration solution, putting the company in direct competition with the current incumbents, which concentrate only on the vSphere space. FlashSoft already runs on multiple operating systems.
In the world of virtualization storage, it seems all we talk about lately is flash and SSD. There is a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you needed to add capacity in order to add performance, because each disk, each “spindle,” could only support a certain number of I/Os per second, or IOPS. This was governed by the mechanical nature of the drives themselves, which had to wait for the seek arm to move to a different place on the disk, wait for the seek arm to stop vibrating from the move, wait for the desired sector to rotate underneath the read head, and so on. There is only so much of that type of activity that can be done in a second, and in order to do more of it, you needed to add more drives. Of course, that has drawbacks: increased power draw, more parts (and thus more chances of failure), and increased licensing costs, since many storage vendors charge based on capacity.
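To make the spindle math concrete, here is a rough back-of-the-envelope sketch in Python. The per-drive IOPS figures are assumptions (typical ballpark numbers for each rotational speed), not specs for any particular drive:

```python
import math

# Assumed per-spindle IOPS by rotational speed (ballpark figures only;
# real drives vary by workload, interface, and queue depth).
IOPS_PER_SPINDLE = {7200: 80, 10000: 125, 15000: 180}

def spindles_needed(target_iops, rpm):
    """Number of drives needed to reach target_iops at a given RPM."""
    return math.ceil(target_iops / IOPS_PER_SPINDLE[rpm])

# A 10,000 IOPS workload still needs dozens of 15K spindles,
# regardless of how much raw capacity those drives happen to add.
print(spindles_needed(10000, 15000))  # 56 drives
print(spindles_needed(10000, 7200))   # 125 drives
```

The point of the arithmetic is the coupling the paragraph describes: every extra unit of performance drags extra capacity (and power, and parts, and licensing) along with it.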
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, the act of seeking on a solid state disk is a completely logical one. There are no heads, no sectors, no rotation speeds. It’s all the speed of light and however fast the controller can go. As such, flash memory can do enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money in licensing fees, and your workloads run faster. Continue reading SanDisk FlashSoft for VMware vSphere→
Ask any virtualization administrator what their major pain points are, and the first thing on the list will be storage. That isn’t surprising. Storage was likely the first major bottleneck for virtualization, back when it was “the Internet” and not “the cloud.” And as any IT person can tell you, there are two ways storage can be a bottleneck: performance and capacity. Traditionally, the problem of capacity is less complicated to solve than that of performance. To gain capacity, you just add disk. To gain performance, you needed to select a disk form factor (2.5″ or 3.5″), a connection technology (SAS, iSCSI, Fibre Channel), a rotational speed (7,200, 10,000, or 15,000 RPM), sometimes a controller (do I get the Dell PERC with 512 MB of cache or 1 GB?), and then do the math to figure out how many disks you need to solve both the problem of your I/O and its corollary: the problem of your budget. Complicating things, virtualization turned most I/O into random I/O. What might be a nice sequential write from each virtual machine looks pretty random in aggregate. Of course, random I/O is the hardest type of I/O for a disk to do. Continue reading Caching as a Service→
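As a sketch of that sizing math, the following Python toy picks the cheapest drive type that satisfies both a capacity target and an IOPS target. The drive catalog, prices, and performance figures are hypothetical, purely for illustration:

```python
import math

# Hypothetical drive catalog: name -> (capacity GB, IOPS, price USD).
# These are illustrative assumptions, not real product specs.
DRIVES = {
    "7.2K 3.5in 2TB":  (2000, 80, 150),
    "10K 2.5in 900GB": (900, 125, 250),
    "15K 2.5in 600GB": (600, 180, 400),
}

def cheapest_config(capacity_gb, iops):
    """Cheapest drive type meeting BOTH the capacity and IOPS targets."""
    best = None
    for name, (cap, perf, price) in DRIVES.items():
        # You must buy enough drives for whichever constraint is harder.
        count = max(math.ceil(capacity_gb / cap), math.ceil(iops / perf))
        cost = count * price
        if best is None or cost < best[2]:
            best = (name, count, cost)
    return best

# For 10 TB and 5,000 IOPS, every option is driven by the IOPS
# requirement, not capacity; the winner ships far more TB than needed.
print(cheapest_config(10000, 5000))
```

Note how the IOPS constraint, not capacity, dictates the drive count for every option here; that imbalance is exactly the pain caching and flash products set out to relieve.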
There is a new set of tools available for caching up and down the stack, which we covered in Caching throughout the Stack. In reality, though, where is the best place to cache data for your application, and what are the ramifications of using such a cache? Recently, we had a caching problem; actually, two of them. Both were caused by the same thing: a lack of full understanding about what was being cached. For any application, the best way to cache is in memory, as close to the application stack as possible, which in our stack could be within the application, the OS, or even a hypervisor-based disk cache. However, which does your application actually use? Continue reading Caching your Application, OS, or Storage→
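To illustrate caching in memory within the application itself, the closest layer of all, here is a minimal Python sketch using the standard library’s lru_cache. The page-rendering function is a made-up stand-in, not part of any product discussed here:

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def render_page(slug):
    """Simulate an expensive operation whose result is cached in
    application memory, the layer closest to the application stack."""
    # In a real application this would hit a database or template engine.
    return f"<html>content for {slug}</html>"

render_page("home")                    # first call does the work
render_page("home")                    # second call is served from memory
print(render_page.cache_info().hits)   # 1
```

An application-level cache like this is invisible to the OS and hypervisor caches below it, which is exactly why it pays to know which layer your application is actually using.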
One sure way to improve performance is to cache the non-dynamic data of any application. We did this to improve the overall performance of The Virtualization Practice website. However, there are many places within the stack where caching can improve overall performance, and this got me thinking about all the different types. At the last Austin VMUG, there were at least three vendors selling caching solutions designed to improve overall performance by anywhere from 2x to upwards of 50x. That is quite a lot of improvement in application performance. Where do all these caching products fit into the stack? Continue reading Caching throughout the Stack→
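As a concrete illustration of caching non-dynamic data, here is a minimal time-to-live cache held in application memory. The class name, keys, and TTL value are illustrative assumptions, not any vendor’s product:

```python
import time

class TTLCache:
    """Tiny time-based cache for content that rarely changes (a sketch)."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key, compute):
        """Return the cached value, recomputing only after the TTL expires."""
        value, expiry = self.store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value  # cache hit: the expensive work is skipped entirely
        value = compute()
        self.store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = TTLCache(ttl_seconds=300)
page = cache.get("/about", lambda: "static about-page HTML")
print(page)
```

Every caching product in the stack, from a web server’s page cache down to a hypervisor flash cache, is some elaboration of this trade: spend a little memory to skip repeating work.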