I just returned from a week in Las Vegas at AWS re:Invent, Amazon Web Services’ annual conference. I have either attended or watched the live stream every year for the past several years, and I am continually amazed at the number of new services and features that AWS cranks out annually. During the course of each year, I keep reading about how the other public cloud providers are gaining ground on AWS. However, I am not seeing that. Amazon is dominating with large enterprises and Fortune 500 companies. Many of the big wins from the other cloud providers are in companies looking at multicloud strategies or targeting specific workload types (e.g., Google for big data workloads).
Software-defined storage (SDS) is about data services. Many think it is about automating storage, and it certainly includes that, but at its core it is about what storage can deliver. So, what is the basis for SDS? There are four critical components: analytics, augmentation, aggregation, and security. These four elements wrap storage to turn it into data services. Data services, and control of them, are therefore the key components of SDS. What data services can SDS provide that do not already exist? Is it enough to add deduplication, or is more necessary? Let us look at these data services in detail.
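One way to picture the four components wrapping storage is as a layer between the application and a plain storage backend. The sketch below is purely illustrative, with hypothetical class and method names, assuming a simple key/value backend: analytics counts I/O, augmentation tracks metadata, security provides an encrypt-at-rest hook, and aggregation presents several backends as one namespace.

```python
class BlockStore:
    """A bare storage backend: it only keeps bytes by key."""
    def __init__(self):
        self._blocks = {}

    def write(self, key, data):
        self._blocks[key] = data

    def read(self, key):
        return self._blocks[key]


class DataServices:
    """Wraps a backend so every I/O passes through the data services."""
    def __init__(self, backend):
        self.backend = backend
        self.io_stats = {"writes": 0, "reads": 0}  # analytics: I/O counters
        self.index = {}                            # augmentation: metadata about stored objects

    def write(self, key, data, encrypt=lambda b: b):
        self.io_stats["writes"] += 1               # analytics: record the operation
        self.index[key] = len(data)                # augmentation: remember object size
        self.backend.write(key, encrypt(data))     # security: encrypt-at-rest hook

    def read(self, key, decrypt=lambda b: b):
        self.io_stats["reads"] += 1
        return decrypt(self.backend.read(key))


class AggregatedStore:
    """Aggregation: present many wrapped backends as one namespace."""
    def __init__(self, services):
        self.services = services

    def write(self, key, data):
        # Simplistic placement policy: hash the key across backends.
        self.services[hash(key) % len(self.services)].write(key, data)
```

The point of the sketch is that the backend itself stays dumb; everything the article calls a data service lives in the wrapping layer, which is exactly where SDS puts its control.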
Secure Agile Cloud Development takes Agile and DevOps to the next level. It is about code quality, based not just on what the developers test, but also on continuous testing and on dynamic and static code analysis. Most importantly, it is about a repeatable and trackable process by which we can make code quality assessments. We can find out the “who did what, when, where, how, and why” of our code, which makes it a useful tool in incident response. Imagine a world in which our production environments are run entirely by code.
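The “who did what, when, where, how, and why” idea can be made concrete as an audit trail: each quality gate (unit tests, static analysis, and so on) appends a record for a commit, and incident responders can replay everything known about that commit. This is a hypothetical sketch, not any real tool’s API; all names are invented.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class Assessment:
    """One code-quality assessment, capturing the article's six questions."""
    commit: str       # what changed
    author: str       # who made the change
    stage: str        # how it was assessed (unit tests, static analysis, ...)
    environment: str  # where the assessment ran
    reason: str       # why the change was made
    passed: bool
    at: str = field(  # when, recorded automatically
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log of assessments; the basis of a trackable process."""
    def __init__(self):
        self._entries = []

    def record(self, entry: Assessment):
        self._entries.append(entry)

    def history(self, commit: str):
        """Everything we know about one commit -- useful in incident response."""
        return [e for e in self._entries if e.commit == commit]
```

Because entries are frozen and only appended, the trail is repeatable evidence rather than a mutable report, which is what makes the process trackable.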
Let’s start the new year right with one of my current favorite topics for discussion: automation. In this article, I concentrate on the second-day operations type of automation. Second-day operations is quite a different beast from build and decommission automation, in that it incorporates several different approaches to automation.
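What distinguishes second-day operations from build automation can be shown in a few lines. Build automation runs once and is done; day-two automation repeatedly compares desired state to observed state and applies only what is missing. This is a minimal sketch under that assumption, with an invented function name:

```python
def remediate(desired: dict, observed: dict) -> dict:
    """Return the actions needed to bring observed state in line with
    desired state. Running it again after applying the fixes yields no
    actions, which is what makes the loop safe to run continuously
    (idempotent remediation)."""
    actions = {}
    for key, want in desired.items():
        if observed.get(key) != want:
            actions[key] = want
    return actions
```

A day-two controller would call this on a schedule, apply the returned actions, and converge; a build tool, by contrast, would simply create everything in `desired` once.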
After the Apollo 1 disaster, astronaut Frank Borman told Congress that the tragedy had not been caused by any one company or organization, but by everyone involved with the Mercury, Gemini, and Apollo programs. The problem had been a failure of imagination. They knew that at some point there would be a fire in a space capsule. However, they assumed it would take place somewhere in space. They just did not think about the possibility of fire while the capsule was still on the ground. In the security world, we call this failure of imagination “unknown unknowns,” but it boils down to the same thing. We just do not think about some things. Even with all the tools out there to help us, we have failures of imagination.
We all need performance and capacity management tools to fine-tune our virtual and cloud environments, but we need them to do more than just tell us there may be problems. Instead, we need them to find root causes for problems, whether those problems are related to code, infrastructure, or security. Applications newly designed for the cloud à la Netflix, as well as older technologies instantiated within the cloud, need to tell us more about their health. Into this breach step a new set of tools, alongside an existing set of tools.
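The step from “there may be a problem” to a probable root cause can be sketched as a tiny classifier over a few signals. This toy is illustrative only: the signal names and thresholds are invented, and a real tool would correlate far more telemetry, but it shows the code/infrastructure/security split the article asks for.

```python
def probable_cause(metrics: dict) -> str:
    """Tag the likely layer behind a problem from a handful of signals.
    All thresholds below are invented for the sketch."""
    # Security first: an authentication-failure spike can masquerade as load.
    if metrics.get("auth_failures_per_min", 0) > 100:
        return "security"
    # Infrastructure: host-level saturation or slow storage.
    if metrics.get("cpu_pct", 0) > 95 or metrics.get("disk_latency_ms", 0) > 50:
        return "infrastructure"
    # Code: application errors rising while the hosts look healthy.
    if metrics.get("error_rate_pct", 0) > 5:
        return "code"
    return "healthy"
```

The ordering is the design choice worth noting: checking security signals before load signals avoids misreading an attack as a capacity problem.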