VMware’s management strategy continues to evolve, both on its own and as part of the vCloud Suite. At VMworld in Barcelona, VMware made an important series of announcements that both clarified its strategy and demonstrated significant progress in some important areas. Continue reading VMworld 2013 Update: VMware’s Management Strategy
Everybody in IT knows by now that flash memory is redefining the enterprise storage industry, mostly by decoupling performance from capacity. Most storage vendors are happy to simply add flash to their existing product lines, often using it as cache or as a storage tier handled transparently within the array. Few vendors, though, take the opportunity to rethink the way storage works, from the basics of performance to how it meshes with the idea of public and private clouds. Coho Data, coming out of stealth mode with its first product, the DataStream, does just that. Continue reading Coho Data DataStream
Ask any virtualization administrator what their major pain points are, and the first thing on the list will be storage. It isn’t surprising. Storage was likely the first major bottleneck for virtualization, back when it was “the Internet” and not “the cloud.” And as any IT person can tell you, there are two ways storage can be a bottleneck: performance and capacity. Traditionally, the problem of capacity is less complicated to solve than that of performance. To gain capacity, you just add disk. To gain performance, you need to select a disk form factor (2.5″ or 3.5″), a connection technology (SAS, iSCSI, Fibre Channel), a rotational speed (7,200, 10,000, or 15,000 RPM), sometimes a controller (do I get the Dell PERC with 512 MB of cache or 1 GB?), and then do the math to figure out how many disks you need to match both the problem of your I/O and its corollary: the problem of your budget. Complicating things, virtualization turned most I/O into random I/O. What might be a nice sequential write from each virtual machine looks pretty random in aggregate. Of course, random I/O is the hardest type of I/O for a disk to do. Continue reading Caching as a Service
Cisco announced today its intent to acquire Whippany, NJ-based WHIPTAIL, a manufacturer of solid-state disk (SSD) storage. The strategy for Cisco is to provide a “converged infrastructure including compute, network and high performance solid state that will help address our customers’ requirements for next-generation computing environments,” said Paul Perez, vice president and general manager, Cisco Computing Systems Product Group. Continue reading News: Cisco Intends to Acquire SSD Pioneer WHIPTAIL
At the US VMworld 2013 conference, VMware did an excellent job of explaining how network virtualization and storage virtualization are going to work. Adding network virtualization and storage virtualization to the existing virtualization of compute (CPU and memory), along with APIs and policies to manage the whole thing, is what creates a software-defined data center. Continue reading VMworld 2013 Wrap Up – The Software Defined Data Center
By Greg Schulz, Server and StorageIO @storageio
The best server and storage I/O is the one you do not have to do; the second best is the one with the least impact and the greatest benefit to an application. This is where SSD, including DRAM- and NAND-flash-based solutions, comes into the conversation for storage performance optimization.
The question is not if, but rather when, where, what, and how much SSD (NAND flash or DRAM) you will have in your environment, either to replace or to complement HDDs. Continue reading When and Where to Use NAND Flash SSD for Virtual Servers
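The principle that the best I/O is the one you avoid can be illustrated with a toy read cache: a small, fast tier (DRAM or NAND flash) in front of slow HDDs means hot blocks never touch the back end at all. This is a minimal LRU sketch for illustration only; the class name and interface are invented here, and real SSD caching layers are of course far more involved (write handling, persistence, admission policies).

```python
from collections import OrderedDict

class FastTierReadCache:
    """Toy LRU read cache: a small SSD/DRAM tier in front of HDDs.

    A cache hit returns immediately; only misses pay for the slow
    back-end read -- the I/O we are trying not to do.
    """

    def __init__(self, capacity, backing_read):
        self.capacity = capacity
        self.backing_read = backing_read  # slow HDD read function
        self.cache = OrderedDict()
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # mark as most recently used
            self.hits += 1
            return self.cache[block]
        self.misses += 1
        data = self.backing_read(block)    # the expensive HDD I/O
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the coldest block
        return data

# A hot working set that fits in the fast tier hits the HDDs only once
# per block, no matter how many times the application re-reads it.
cache = FastTierReadCache(capacity=2, backing_read=lambda b: f"data-{b}")
for _ in range(10):
    cache.read("blk-A")
    cache.read("blk-B")
print(cache.misses, cache.hits)  # prints: 2 18
```

The “when and where” question in the excerpt is then largely about whether a workload’s hot set fits the fast tier: when it does, even a small amount of SSD eliminates most back-end I/O.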