Over the last few years there has been an increase in the number of database-as-a-service (DBaaS) offerings entering the marketplace. IaaS providers like Amazon have released solutions such as RDS that automate database administration tasks including scaling, replication, failover, and backups. A number of companies now offer automation around NoSQL and big-data technologies like Hadoop, MongoDB, Redis, Memcached, and numerous other database technologies.
The Cloud Management layer of the Software Defined Data Center is where the flexibility of the SDDC is translated into tangible benefits for the business constituents of the virtualized data center and the private, hybrid, and public clouds. Without a robust Cloud Management layer, the IT operations flexibility of the SDDC cannot translate into the relevant level of business agility.
While at Interop I participated in a Tech Field Day event where Spirent was talking about its new Axon product, as well as the possible use of Blitz.io. It was an interesting discussion and gave me some food for thought. As we move to cloud-scale apps based on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but will they be used in this new model?
At EMC World 2013, one of the big stories was Pivotal and its importance to the EMC family and the future of computing. Pivotal is geared to provide the next generation of computing. According to EMC, we have moved past the client-server style to a scale-out, scale-up, big data, fast data, Internet of Things form of computing. The real question, however, is how can we move traditional business-critical applications to this new model, or should we? Is there a migration path one can take?
We recently had a conversation with DataStax regarding its DataStax Enterprise product, which got us thinking about the nature of big data and cloud. DataStax is the company behind the open source Cassandra NoSQL database. It provides technical direction and the majority of committers to the Apache Cassandra project.
The Virtualization Practice has released a major update to its Application Performance Management for Virtualization and Cloud white paper. This paper covers both Application Performance Management for custom-developed applications (DevOps) and Application Performance Management for every application, purchased and custom developed, in production (AppOps).
Customers using PaaS cloud offerings like Heroku are clearly reliant upon both Heroku and partnering monitoring vendors like New Relic to provide complete information about PaaS cloud application performance. Being fully transparent in this regard is likely to prove both a technical and a business challenge for the PaaS cloud vendors.
The software defined data center has the potential to expand the control plane well outside of anyone's control, for the simple reason that we do not yet have a unified control mechanism for disparate hardware (networking, storage, and compute), disparate hypervisors (vSphere, KVM, Xen, Hyper-V), new types of hypervisors (storage and networking), or new ideas for managing the SDDC at scale.
The next evolution of virtualization is the Software Defined Data Center, or SDDC. It is quickly becoming the next logical step in the continued evolution of cloud technology, one that will give you the ability to run legacy enterprise applications as well as other cloud services. In my opinion, you could also describe the Software Defined Data Center as a converged data center, so to speak. My friend and colleague Edward Haletky wrote a great post on SDDC and data protection, which raised this question: how the heck do we recover the SDDC?