The Orchestration and Automation layer of the Software Defined Data Center (SDDC) is where the benefits of the SDDC are translated into working applications for end users and business constituents. Every Cloud Management Platform relies upon either scripts or one of these automation frameworks to provision and configure the actual end-user services and applications. These solutions represent an opportunity to replace first-generation automation management tools and scripts with modern declarative, model-based approaches that are far more manageable and scalable.
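To make the contrast concrete, here is a minimal sketch of the declarative, model-based idea: rather than a script listing imperative steps, you declare the desired state and a reconciler computes the actions needed to get there. All names here are illustrative, not any specific product's API.

```python
def reconcile(desired, actual):
    """Return the provisioning actions needed to move `actual` toward `desired`.

    Both arguments map a resource name to its specification (a plain dict).
    """
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name, spec))      # missing entirely
        elif actual[name] != spec:
            actions.append(("update", name, spec))      # drifted from the model
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))            # no longer declared
    return actions

# The operator edits only the model; the engine derives the steps.
desired = {"web": {"cpus": 2}, "db": {"cpus": 4}}
actual = {"web": {"cpus": 1}, "cache": {"cpus": 1}}
print(reconcile(desired, actual))
```

Because the same model can be re-applied at any time, the approach scales to many environments in a way a one-shot provisioning script does not: the reconciler is idempotent, so re-running it against an already-correct environment produces no actions.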
I recently read The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. If you work in development, IT, or security, it should be #1 on your reading list. In this book the authors discuss all the horrors we hear about in IT, with a clear direction on how to fix them. There are politics, shadow IT, overzealous security professionals, overworked critical employees, and lots of finger-pointing. But there is a clear solution, at least as far as the story goes. We also know that DevOps works, most of the time.
Over the last few years there has been an increase in the number of Database as a Service (DBaaS) offerings entering the marketplace. IaaS providers like Amazon have released solutions such as RDS that automate database administration tasks in the areas of scaling, replication, failover, backups, and more. A number of companies offer automation around NoSQL and big-data technologies such as Hadoop, MongoDB, Redis, Memcached, and numerous others.
The Cloud Management layer of the Software Defined Data Center is where the flexibility of the SDDC is translated into tangible benefits for the business constituents of the virtualized data center, private clouds, hybrid clouds, and public clouds. Without a robust Cloud Management layer, the IT operations flexibility of the SDDC cannot translate into the relevant level of business agility.
While at Interop I participated in a Tech Field Day event where Spirent discussed its new Axon product as well as the possible use of Blitz.io. It was an interesting discussion, but it gave me some food for thought. As we move to cloud-scale apps based on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but would they be used in this new model?
At EMCworld 2013, one of the big stories was Pivotal and its importance to the EMC2 family and the future of computing. Pivotal is geared to provide the next generation of computing. According to EMC2, we have gone past the client-server style to a scale-out, scale-up, big data, fast data, Internet of Things form of computing. The real question, however, is how can we move traditional business-critical applications to this new model, or should we? Is there a migration path one can take?
We recently had a conversation with DataStax regarding its DataStax Enterprise product, which got us thinking a little about the nature of Big Data and cloud. DataStax is the company behind the open-source Cassandra NoSQL database; it provides technical direction and the majority of committers to the Apache Cassandra project.
The Virtualization Practice has released a major update to its Application Performance Management for Virtualization and Cloud white paper. This paper covers Application Performance Management both for custom-developed applications (DevOps) and for every application, purchased and custom developed, in production (AppOps).
Customers using PaaS cloud offerings like Heroku are clearly reliant upon both Heroku and partner monitoring vendors like New Relic to provide complete information about PaaS cloud application performance. Being fully transparent in this regard is likely to prove both a technical and a business challenge for PaaS cloud vendors.