Right on the heels of VMware’s announcement of Log Insight comes Splunk with the announcement of the beta of Hunk: Splunk Analytics for Hadoop. This is a hugely significant development for both Splunk and the big data analytics industry, as it allows customers to use Splunk indexing, searching, and visualization features on top of Hadoop data stores. Continue reading Splunk Launches Beta of Hunk: Splunk Analytics for Hadoop
For the entire (brief) history of the cloud computing business there has been too much focus upon infrastructure (IaaS) and platform (PaaS) and not enough focus upon applications. CliQr is one of a set of important new cloud management vendors whose offerings focus upon deploying actual applications in on-premises and cloud environments. Other vendors with this focus include ServiceMesh, Cloud Sidekick, ElasticBox, and, of course, VMware with vFabric Application Director. Continue reading CliQr Addresses Cloud Price/Performance with Free Cloud Benchmarking Service
There is a new set of tools available for caching up and down the stack, which we covered in Caching throughout the Stack. In reality, though, where is the best place to cache data for your application, and what are the ramifications of using such a cache? Recently, we had a caching problem; actually, we had two of them, both caused by the same thing: a lack of full understanding of what was being cached. For any application, the best place to cache is in memory, as close to the application stack as possible, which in our stack could be within the application, the OS, or even a hypervisor-based disk cache. However, which does your application actually use? Continue reading Caching your Application, OS, or Storage
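To make "caching within the application" concrete, here is a minimal Python sketch of an application-level in-memory cache; `fetch_record` is a hypothetical stand-in for a slow database or storage read, not anything from the post:

```python
import functools
import time

# Hypothetical expensive lookup standing in for a database or storage read.
def fetch_record(key):
    time.sleep(0.01)  # simulate the I/O latency the cache is meant to hide
    return {"key": key, "value": key.upper()}

# Application-level cache: results live in the process's own memory,
# as close to the application as a cache can get.
@functools.lru_cache(maxsize=1024)
def cached_fetch(key):
    return fetch_record(key)

first = cached_fetch("alpha")   # miss: pays the full I/O cost
second = cached_fetch("alpha")  # hit: served straight from process memory
print(cached_fetch.cache_info())
```

One ramification worth knowing before relying on a cache like this: every caller receives the same cached object, so a mutation by one caller is visible to all the others, and nothing here invalidates the entry when the underlying record changes.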
Recently, when I was in Las Vegas for HP Discover, I realized that the Venetian/Palazzo complex is really a cloud: Vegas as a Service. IT could learn a lot from Las Vegas, actually; I think that each hotel complex is a private cloud, and that taken together the strip is one big cloud. Granted, it is a cloud with a single purpose, but it has all the earmarks of a good cloud.
Continue reading Vegas as a Service
Right now is a particularly interesting time in the world of IT. Historically, IT has swung back and forth between centralization and decentralization, closed and open, tightly controlled and loosely controlled. Lately, though, a third option has cropped up: centralized control with decentralized workloads. In my opinion it’s a function of speed, implemented through bandwidth and processing capacity. We now have enough bandwidth between our devices to start treating the device in the next rack column like a slightly-less-local version of ourselves. We also have enough bandwidth that we’ve outstripped our need for separate storage and data networks, and can converge them onto a single wire, running a single set of protocols (most notably TCP and IP). On the processing side, each node is basically a datacenter unto itself: 16, 32, or 64 cores per server and terabytes of RAM. The advent of SSD and PCIe flash rounds out the package, lessening the need for large monolithic collections of spindles (aka “traditional storage arrays”). The problem then becomes one of control. How do we take advantage of the performance and cost that local processing brings, yet maintain all the control, redundancy, and management benefits we had with a monolithic solution, while keeping the complexity in check? And while we usually talk about doing this at great scale, can we do this on a small scale, too?
What is the point of moving the control plane out of the hardware that comprises the Software Defined Data Center and into the software which comprises the SDDC? The point is to surface that control plane through a consistent set of human and programmatic interfaces to allow for SDDC automation and orchestration. Continue reading SDDC Automation and Orchestration
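As a rough illustration of what "a consistent programmatic interface" buys you, here is a hypothetical Python sketch; `SddcControlPlane` and its `provision` method are invented for illustration and do not correspond to any vendor's actual API:

```python
# Hypothetical sketch of a software control plane for an SDDC.
# All names here are illustrative assumptions, not a real product API.
class SddcControlPlane:
    def __init__(self):
        self.inventory = []  # desired state recorded by the control plane

    def provision(self, kind, name, **spec):
        # In a real SDDC this single call would drive the compute,
        # network, and storage controllers; here we just record intent.
        resource = {"kind": kind, "name": name, "spec": spec}
        self.inventory.append(resource)
        return resource

# Orchestration becomes scripting against one interface,
# rather than configuring each piece of hardware separately:
cp = SddcControlPlane()
cp.provision("vm", "web-01", cpus=2, memory_gb=4)
cp.provision("network", "web-net", vlan=120)
print(len(cp.inventory))
```

The design point is that once the control plane lives in software behind one interface, automation tools can compose these calls into full workflows instead of touching device-specific consoles.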