Recently I have had the pleasure of discussing security with a number of cloud providers. Specifically, we talked about what security measures they implement and how they inform their tenants of security-related issues. In other words, do they provide transparency? I have come to an early conclusion that there are two types of clouds out there: those that provide additional security measures and work with their tenants to improve security, and those that do not. On the Virtualization Security podcast we have discussed this many times, with the conclusion being that many clouds do a better job at security than the average organization does, but that there is no way to know what is implemented, as there is no transparency. Continue reading A Tale of Two Clouds
Moving the configuration of the environment from the hardware that supports it to a layer of software that can collectively manage all of the storage, networking, compute, and memory resources of the environment is one of the main goals of the SDDC. Once all of the configuration of the data center is moved into software, and some of the execution of the work is moved into software as well, SDDC Data Center Analytics will play a critical role in keeping your SDDC up and running with acceptable performance. Continue reading Software Defined Data Center Analytics
One of the great things about Splunk, as both an Operations Management tool and an Application Performance Management tool, is the ease with which an astonishing variety of data sources can be fed into the Splunk data store. Splunk automatically indexes this data based upon time stamps and stores it in a back-end data store that scales out horizontally on commodity servers with commodity storage. This makes Splunk one of the very few management solutions that can scale out to accept the tsunami of management data that is generated across the infrastructure and application stack in a modern dynamic or cloud-based environment.
The Splunk Architecture
The wealth of data sources that can be collected and indexed by Splunk is shown in the left portion of the image below. The scaled-out architecture that allows Splunk to keep up with the management data tsunami is shown in the rest of the diagram.
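The time-based indexing described above can be illustrated with a toy sketch. This is not Splunk's actual implementation (the hostnames and log lines below are invented for illustration); it only shows the core idea: extract a timestamp from every incoming event, keep events ordered by time, and then any time-range search becomes a fast binary search.

```python
import bisect
from datetime import datetime

# Toy illustration of time-based event indexing -- NOT Splunk's
# implementation. Events are kept sorted by timestamp so a
# time-range query reduces to two binary searches.

class TimeIndex:
    def __init__(self):
        self._timestamps = []   # sorted list of datetimes
        self._events = []       # raw events, parallel to _timestamps

    def ingest(self, timestamp, raw_event):
        """Insert an event in timestamp order."""
        i = bisect.bisect(self._timestamps, timestamp)
        self._timestamps.insert(i, timestamp)
        self._events.insert(i, raw_event)

    def search(self, start, end):
        """Return all events with start <= timestamp < end."""
        lo = bisect.bisect_left(self._timestamps, start)
        hi = bisect.bisect_left(self._timestamps, end)
        return self._events[lo:hi]

# Hypothetical log lines from different layers of the stack.
index = TimeIndex()
index.ingest(datetime(2012, 3, 1, 10, 0), "web01 GET /login 200")
index.ingest(datetime(2012, 3, 1, 10, 5), "db01 slow query: 4100ms")
index.ingest(datetime(2012, 3, 1, 11, 0), "web01 GET /login 500")

morning = index.search(datetime(2012, 3, 1, 10, 0),
                       datetime(2012, 3, 1, 10, 30))
print(morning)  # -> ['web01 GET /login 200', 'db01 slow query: 4100ms']
```

A real indexer would of course persist to disk and shard across servers, which is where the scale-out portion of the diagram comes in.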
Now we come to the part about the good news and the bad news. The good news is that Splunk is able to be your management data store across your physical hardware, virtualization layer, operating system layer, application infrastructure (middleware) layer, and the applications themselves.
The bad news is that, until today, if you wanted to pull together all of the data that pertained to a particular application, you had to be an expert in the topology of that application (where it runs), in the virtual and physical infrastructure that supports it (what it depends on), and in how to tie disparate data sources together in Splunk to create a cohesive view or dashboard. Organizations with one or a few mission-critical applications of such high value that they warranted a dedicated support team could easily justify the investment in learning required to pull this off. Organizations with thousands of business-critical and performance-critical applications saw this as an infinitely high cliff.
The Prelert Anomaly Detective for Splunk
The Prelert Anomaly Detective automatically learns the normal patterns in the Splunk data. It then automatically identifies anomalous behavior in that data and uses the Splunk query language to find cross-correlated data and events.
The Prelert Anomaly Detective represents a significant advance in how customers use Splunk and its data. Today most customers use Splunk as a forensics tool to find the cause of a problem after some other tool or user has reported it. The combination of the Prelert Anomaly Detective with Splunk allows Prelert to notify customers of anomalies that they did not even know to look for, anomalies that can easily be leading indicators of problems that have not yet been reported.
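The "learn normal, then flag deviations" idea can be sketched in a few lines. To be clear, this is not Prelert's algorithm; it is a deliberately simple stand-in (a z-score test against a learned baseline, on made-up latency numbers) that illustrates why an anomaly can surface before any user reports a problem.

```python
import statistics

# Toy anomaly detection -- NOT Prelert's algorithm. "Normal" is modeled
# as the mean and standard deviation of a metric over a training window;
# new samples more than `threshold` standard deviations away are flagged.

def learn_baseline(samples):
    """Learn a simple statistical model of normal behavior."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, mean, stdev, threshold=3.0):
    """Flag values far outside the learned normal range."""
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > threshold

# Hypothetical response times (ms) observed during normal operation.
normal_latency = [101, 98, 105, 97, 102, 99, 103, 100, 96, 104]
mean, stdev = learn_baseline(normal_latency)

print(is_anomalous(102, mean, stdev))  # -> False (typical sample)
print(is_anomalous(450, mean, stdev))  # -> True (leading indicator)
```

The 450 ms sample is flagged even though nothing has failed yet, which is exactly the "leading indicator" behavior described above; the real product learns far richer patterns than a single mean and standard deviation.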
The complete Prelert announcement is here – “Prelert Introduces Anomaly Detective, an Advanced Predictive Analytics Solution for Splunk Enterprise Environments”
In “VMware Articulates a Compelling Management Vision – Automated Service Assurance“, we detailed the strategy that VMware announced at VMworld Las Vegas in the fall of 2011. The cornerstone of that strategy was to open up a new ROI for virtualization, one based upon the OPEX savings that come from automating IT Operations, in contrast to the CAPEX savings that come from the server consolidation that has fueled the virtualization industry so far. Continue reading Are we going to see real progress in IT Automation and Service Assurance in 2012?