In the world of virtualization storage, it seems all we talk about lately is flash and SSD. There is a good reason for that. Traditionally, storage capacity and storage performance were directly linked. Sure, you could choose different disk capacities, but in general you needed to add capacity in order to add performance, because each disk, or “spindle,” could only support a certain number of I/Os per second (IOPS). That limit was governed by the mechanical nature of the drives themselves, which had to wait for the seek arm to move to a different place on the disk, wait for the arm to stop vibrating after the move, wait for the desired sector to rotate underneath the read/write head, and so on. There is only so much of that activity that can happen in a second, and in order to do more of it you needed to add more drives. Of course, that approach has drawbacks: increased power draw, more parts and therefore more chances of failure, and increased licensing costs, since many storage vendors charged based on capacity.
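To see why a spindle tops out at a certain number of IOPS, it helps to run the back-of-envelope math: every random I/O pays an average seek plus, on average, half a platter rotation. The latency figures below are illustrative assumptions, not specs for any particular drive:

```python
# Rough random-read IOPS ceiling for a single spinning disk.
# All latency figures are illustrative assumptions.

def max_iops(avg_seek_ms: float, rpm: int) -> float:
    """Theoretical random IOPS for one spindle: each I/O pays an
    average seek plus, on average, half a rotation of latency."""
    rotational_ms = (60_000 / rpm) / 2  # half a rotation, in ms
    return 1000 / (avg_seek_ms + rotational_ms)

# Typical-ish figures: a 4 ms average seek on a 7200 RPM drive,
# versus a 3 ms seek on a 15K RPM drive.
print(f"{max_iops(4.0, 7200):.0f} IOPS")   # roughly 120 IOPS per spindle
print(f"{max_iops(3.0, 15000):.0f} IOPS")  # 200 IOPS: faster, but still bounded
```

A hundred-odd IOPS per spindle is why adding performance meant adding drives: to get 10,000 random IOPS out of hardware like this, you needed on the order of 80 to 100 spindles regardless of how much capacity you actually wanted.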
Flash memory takes most of what we know about the physics of storage and throws it away. Because there are no moving parts, a seek on a solid-state disk is a purely logical operation. There are no heads, no sectors, no rotational speeds; performance is bounded only by the electronics, by however fast the controller and the flash itself can go. As such, flash memory can deliver enormous numbers of IOPS, and if implemented well, it decouples storage performance from storage capacity. You save power, you save data center space, you save money in licensing fees, and your workloads run faster. Continue reading SanDisk FlashSoft for VMware vSphere→
The best server and storage I/O is the one you do not have to do; the second best is the one with the least impact and the greatest benefit to the application. This is where SSD, including DRAM- and NAND-flash-based solutions, enters the conversation about storage performance optimization.
As virtualization slowly takes over almost everything in information technology, certain things need to change. One of those things is the way storage operates. Traditional enterprise storage was built for a time when physical machines were king, and there was only one operating system, and often only one workload, per physical server. Virtualization changes that, putting multiple workloads and multiple OS images on a single host, often causing predictive caching algorithms to fail because the I/O from a particular server looks almost completely random (the so-called “I/O blender”). In fact, the I/O isn’t random; it’s just the result of multiple VMs each doing their own thing. Most monolithic storage vendors have adapted their arrays to at least partly understand this new type of I/O. However, there is a whole new class of storage company that is looking to start over, upending the storage market by pairing commodity hardware with a deeper understanding of virtual environments and new management models. Continue reading Tintri OS 2.0 & ReplicateVM→
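The I/O blender effect is easy to demonstrate with a toy model: give several hypothetical VMs perfectly sequential read streams, interleave them the way a hypervisor would, and measure how sequential the merged stream still looks to the array. Everything here (VM counts, address ranges) is an illustrative assumption:

```python
import random

# Toy model of the "I/O blender": each VM issues perfectly sequential
# block reads from its own address region, but the hypervisor interleaves
# the streams, so the array sees a mostly non-sequential mix.

def blended_stream(num_vms: int, ios_per_vm: int, seed: int = 42) -> list:
    random.seed(seed)
    # Each VM reads sequential block addresses from its own region.
    streams = [list(range(vm * 100_000, vm * 100_000 + ios_per_vm))
               for vm in range(num_vms)]
    blended = []
    while any(streams):
        vm = random.choice([i for i, s in enumerate(streams) if s])
        blended.append(streams[vm].pop(0))
    return blended

def sequential_fraction(stream: list) -> float:
    """Fraction of I/Os whose address immediately follows the previous one."""
    seq = sum(1 for a, b in zip(stream, stream[1:]) if b == a + 1)
    return seq / (len(stream) - 1)

print(f"1 VM:  {sequential_fraction(blended_stream(1, 400)):.0%} sequential")
print(f"8 VMs: {sequential_fraction(blended_stream(8, 50)):.0%} sequential")
```

With one VM the array sees a 100% sequential stream; with eight VMs blended together, only a small fraction of consecutive I/Os are adjacent, even though every individual workload is perfectly sequential. That is exactly the pattern that defeats prefetch logic tuned for one-workload-per-server.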
Converged infrastructure comes in many forms. Some vendors put a bunch of discrete hardware together on a pallet and call it “converged.” Others think that their use of iSCSI or FCoE means they’ve got converged storage. Yet the real holy trinity of convergence is when a vendor unifies compute and storage resources on three fronts: acquisition, implementation, and ongoing management. This is where Nutanix operates. Continue reading A Look at the Nutanix NX-3000 Virtual Computing Platform→
One sure way to improve performance is to cache the non-dynamic data of any application. We did this to improve the overall performance of The Virtualization Practice website. However, there are many places within the stack where caching can improve overall performance, and that got me thinking about all the different types. At the last Austin VMUG, there were at least three vendors selling caching solutions designed to improve overall performance by anywhere from 2x to upwards of 50x. That is quite a lot of improvement in application performance. Where do all these caching products fit into the stack? Continue reading Caching throughout the Stack→
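Those 2x-to-50x figures fall out of simple arithmetic: effective latency is the hit-rate-weighted average of the cache latency and the backing-store latency, so the speedup is dominated by the hit rate. A minimal sketch, using assumed (illustrative) latencies of 100 µs for a flash cache hit and 8 ms for a spinning-disk read:

```python
# Why cache hit rate drives big speedup numbers: effective latency is
# the hit-rate-weighted average of cache and backing-store latency.
# The latency figures are illustrative assumptions, not vendor specs.

def effective_latency_us(hit_rate: float, cache_us: float, disk_us: float) -> float:
    return hit_rate * cache_us + (1 - hit_rate) * disk_us

DISK_US = 8000.0   # ~8 ms spinning-disk random read (assumed)
CACHE_US = 100.0   # ~100 us flash cache hit (assumed)

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    eff = effective_latency_us(hit_rate, CACHE_US, DISK_US)
    print(f"hit rate {hit_rate:>4.0%}: {eff:7.1f} us, "
          f"speedup {DISK_US / eff:4.1f}x")
```

Under these assumptions a 50% hit rate delivers roughly the 2x improvement at the low end of the vendors' range, while a 99% hit rate approaches the 50x high end, which is why every caching product lives or dies by how well it predicts what to keep in the cache.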