While at Interop I participated in a Tech Field Day event where Spirent discussed their new Axon product as well as the possibility of using Blitz.io. It was an interesting discussion, but it gave me some food for thought. As we move to cloud-scale apps based on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but would they be used in this new model?
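To make "testing at scale" concrete, here is a minimal sketch of the kind of concurrent load generation these tools automate, written in plain Python. Everything in it is illustrative: the target URL, request count, and concurrency level are hypothetical stand-ins, not Spirent Axon or Blitz.io functionality.

```python
# A minimal load-generation sketch using only the Python standard library.
# The endpoint, request count, and concurrency below are hypothetical.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://example.com/health"  # hypothetical application endpoint
REQUESTS = 200
CONCURRENCY = 20

def hit(url: str) -> float:
    """Issue one GET request and return its latency in seconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

# Fire REQUESTS requests with up to CONCURRENCY in flight at once,
# then summarize the observed latencies.
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(hit, [TARGET] * REQUESTS))

print(f"requests:     {len(latencies)}")
print(f"mean latency: {sum(latencies) / len(latencies):.3f}s")
print(f"worst latency: {max(latencies):.3f}s")
```

A commercial tool adds what this sketch lacks: distributed traffic sources, realistic protocol mixes, and reporting, which is exactly where the question of fit with cloud-scale platforms arises.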
At EMC World 2013, one of the big stories was Pivotal and its importance to the EMC family and the future of computing. Pivotal is geared to provide the next generation of computing: according to EMC, we have moved past the client-server style to a scale-out, scale-up, big data, fast data, Internet of Things form of computing. The real question, however, is how can we move traditional business-critical applications to this new model, or should we? Is there a migration path one can take?
We recently had a conversation with DataStax regarding their DataStax Enterprise product, which got us thinking about the nature of Big Data and Cloud. DataStax is the company behind the open-source Cassandra NoSQL database; it provides technical direction and the majority of committers to the Apache Cassandra project.
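For readers unfamiliar with Cassandra, here is a minimal sketch of its partitioned, scale-out data model using the DataStax Python driver (cassandra-driver). The contact point, keyspace, and table are hypothetical examples, not part of DataStax Enterprise itself.

```python
# A minimal sketch of Cassandra's data model via the DataStax Python
# driver (pip install cassandra-driver). The contact point, keyspace,
# and table below are hypothetical examples.
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])  # assumes a local Cassandra node
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("demo")

# Tables are partitioned by key (sensor_id here) and distributed across
# the cluster, which is what lets Cassandra scale out horizontally.
session.execute("""
    CREATE TABLE IF NOT EXISTS events (
        sensor_id text,
        ts timestamp,
        value double,
        PRIMARY KEY (sensor_id, ts)
    )
""")

session.execute(
    "INSERT INTO events (sensor_id, ts, value) "
    "VALUES (%s, toTimestamp(now()), %s)",
    ("sensor-1", 42.0),
)
for row in session.execute(
    "SELECT * FROM events WHERE sensor_id = %s", ("sensor-1",)
):
    print(row.sensor_id, row.ts, row.value)

cluster.shutdown()
```

The design choice to query by partition key rather than by arbitrary joins is what trades relational flexibility for the linear scalability that Big Data and Cloud workloads demand.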
The Virtualization Practice has released a major update to its Application Performance Management for Virtualization and Cloud white paper. This paper covers Application Performance Management both for custom-developed applications (DevOps) and for every application, purchased and custom-developed, in production (AppOps).
Customers using PaaS Cloud offerings like Heroku are clearly reliant upon both Heroku and partnering monitoring vendors like New Relic to provide complete information about PaaS Cloud application performance. Being fully transparent in this regard is likely to prove both a technical and a business challenge for the PaaS cloud vendors.
The software defined data center has the potential to expand the control plane well outside of anyone's control, for the simple reason that we do not yet have a unified control mechanism spanning disparate hardware (networking, storage, and compute), disparate hypervisors (vSphere, KVM, Xen, Hyper-V), new types of hypervisors (storage and networking), and new ideas about managing the SDDC at scale.
The next evolution of virtualization is the Software Defined Data Center, or SDDC. It is quickly becoming the next logical step in the continued evolution of cloud technology, giving you the ability to run legacy enterprise applications as well as other cloud services. In my opinion, you could also define the Software Defined Data Center as a converged data center, so to speak. My friend and colleague Edward Haletky wrote a great post on SDDC and data protection, which raised this question: how the heck do we recover the SDDC?
I have written about the Public Cloud Reality and the need to bring your own security, monitoring, and support. This was reinforced by Dave Asprey of Trend Micro at the last Cloud Security Alliance Summit, held at this year's RSA Conference. The gist of Dave Asprey's talk was that YOU are responsible for the security of your data, not the cloud service provider.