In Beware of the Franken-Monitor, we explained how many enterprises ended up with Franken-Monitors, and the dangers of assuming that today's management tools can make the transition into the software-defined data center (SDDC) and the cloud. In Getting Rid of Your Franken-Monitor, we explained how to use green-field islands to put new ecosystem-based management stacks in place, with the intent of eventually retiring your legacy management stacks. In this post, we detail how to deploy one example of such an ecosystem: one based upon Splunk and the vendors that comprise its ecosystem. Continue reading Replacing Franken-Monitors and Frameworks with the Splunk Ecosystem
Ten years ago, legacy management software vendors were busy building Franken-Monitors. Those Franken-Monitors now consist of legacy management offerings that are neither well integrated nor in any way able to keep up with the pace of innovation in the industry. To survive your transition to the software-defined data center and the cloud, you will need a management software strategy and a management software architecture that allow you to keep up with the pace of change without buying or building a Franken-Monitor. Continue reading Beware of the Franken-Monitor
On October 17, the Wall Street Journal reported that IBM revenues have now declined for six straight quarters. IBM has told financial analysts that the company is capable of generating revenue growth in the low to mid single digits, but the fact is that IBM has not achieved that kind of growth since 2011. According to the report, IBM’s hardware revenue has fallen by 17%, with the hardware unit losing $167M, and the growth in the software business has gone from 4% to -1% (in other words, the software business has shrunk). Continue reading Are Market Dynamics Going to Kill IBM?
VMware’s Management Strategy continues to evolve, both on its own and as a part of the vCloud Suite. At VMworld in Barcelona, VMware made an important series of announcements that both clarified strategy and demonstrated significant progress in some important areas. Continue reading VMworld 2013 Update: VMware’s Management Strategy
Everybody in IT knows by now that flash memory is redefining the enterprise storage industry, mostly by decoupling performance from capacity. Most storage vendors are happy to just add flash to their existing product lines, often using it as cache or as a storage tier handled transparently within the array. Few vendors take the opportunity to rethink the way storage works, though, from the basics of performance to how it meshes with the idea of public and private clouds. Coho Data, coming out of stealth mode with its first product, the DataStream, does just that. Continue reading Coho Data DataStream
Ask any virtualization administrator what their major pain points are, and the first thing on the list will be storage. That isn't surprising: storage was likely the first major bottleneck for virtualization, back when it was "the Internet" and not "the cloud." And as any IT person can tell you, there are two ways storage can be a bottleneck: performance and capacity. Traditionally, the problem of capacity is less complicated to solve than that of performance. To gain capacity, you just add disk. To gain performance, you need to select a disk form factor (2.5″ or 3.5″), a connection technology (SAS, iSCSI, Fibre Channel), a rotational speed (7,200, 10,000, or 15,000 RPM), sometimes a controller (do I get the Dell PERC with 512 MB of cache or 1 GB?), and then do the math to figure out how many disks you need to satisfy both your I/O problem and its corollary: your budget problem. Complicating things, virtualization turns most I/O into random I/O: what might be a nice sequential write from each virtual machine looks pretty random in aggregate. And of course, random I/O is the hardest kind of I/O for a disk to do. Continue reading Caching as a Service
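The "do the math" step in the teaser above can be sketched quickly. This is a rough illustration only: the per-disk random-IOPS figures below are ballpark assumptions commonly used for spinning disks at each rotational speed, not vendor specifications, and real sizing must also account for RAID write penalties and workload mix.

```python
import math

# Illustrative random-IOPS estimates per spinning disk, keyed by RPM.
# These are assumed ballpark figures, not measured or vendor-quoted numbers.
IOPS_BY_RPM = {7200: 80, 10000: 130, 15000: 180}

def disks_needed(target_iops: int, rpm: int) -> int:
    """Minimum disk count to meet a random-IOPS target at a given speed."""
    return math.ceil(target_iops / IOPS_BY_RPM[rpm])

# Example: a virtualized workload generating 5,000 random IOPS in aggregate.
for rpm in sorted(IOPS_BY_RPM):
    print(f"{rpm:>5} RPM: {disks_needed(5000, rpm)} disks")
# prints:
#  7200 RPM: 63 disks
# 10000 RPM: 39 disks
# 15000 RPM: 28 disks
```

Note how the budget corollary falls out of the same arithmetic: faster spindles cut the disk count, but each disk costs more, which is exactly the trade-off flash caching sidesteps.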