There is a new set of tools available for caching up and down the stack, which we covered in Caching throughout the Stack. In reality, however, where is the best place to cache data for your application, and what are the ramifications of using such a cache? Recently, we had a caching problem, actually two of them, both caused by the same thing: a lack of full understanding of what was being cached. For any application, the best way to cache is in memory, as close to the application stack as possible, which in our stack could be within the application, the OS, or even a hypervisor-based disk cache. However, which does your application actually use?
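To make the "cache in memory as close to the application as possible" point concrete, here is a minimal sketch of application-level in-memory caching using Python's `functools.lru_cache`. The function name and the simulated workload are hypothetical illustrations, not from the stack discussed above.

```python
from functools import lru_cache
import time

@lru_cache(maxsize=128)
def fetch_config(key: str) -> str:
    # Stand-in for an expensive lookup (disk, database, or network).
    time.sleep(0.01)
    return f"value-for-{key}"

# The first call misses the cache and pays the full cost;
# repeat calls are served straight from process memory.
fetch_config("timeout")
fetch_config("timeout")
print(fetch_config.cache_info())  # hits=1, misses=1
```

The key point is that this cache lives inside the application process itself, so a hit never leaves the process, whereas an OS page cache or hypervisor disk cache sits further down the stack and serves different data in different ways.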
By now, enterprises understand the value of Software as a Service (SaaS) and Infrastructure as a Service (IaaS), but there is still much confusion about Platform as a Service (PaaS). This confusion is one reason enterprises have been slow to adopt PaaS. Why is there so much confusion? Because PaaS is still in its early days of maturity, but it is growing up quickly right before our eyes.
I just returned from attending the Cloud Expo in New York City this week. The conference was dominated by private and hybrid cloud topics. Several private Platform as a Service (PaaS) vendors were attending, with whom I spent a great deal of time talking as I walked the floor. It seems these days that many enterprises default to private and hybrid clouds and therefore insist on private PaaS as well. It is critical that consumers of PaaS services understand the pros and cons of both public and private PaaS before making a commitment to a PaaS deployment model.
The Orchestration and Automation layer of the Software Defined Data Center is where the benefits of the SDDC are translated into working applications for end users and business constituents. Every Cloud Management Platform relies upon either a script or one of these automation frameworks to provision and configure the actual end-user services and applications. These solutions represent an opportunity to replace first-generation automation management tools and scripts with modern declarative and model-based approaches that are much more manageable and scalable.
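The declarative, model-based approach mentioned above can be sketched in a few lines: instead of scripting the steps to run, you declare the desired end state and let a reconcile loop compute the actions needed to get there. This is a simplified illustration, and all of the service names and the action format are hypothetical.

```python
def reconcile(desired: dict, current: dict) -> list:
    """Return the provisioning actions that move current state toward the declared model."""
    actions = []
    for svc, count in desired.items():
        delta = count - current.get(svc, 0)
        if delta > 0:
            actions.append(("provision", svc, delta))
        elif delta < 0:
            actions.append(("deprovision", svc, -delta))
    for svc, count in current.items():
        if svc not in desired:
            # Anything not in the model should not be running.
            actions.append(("deprovision", svc, count))
    return actions

desired = {"web": 3, "db": 1}     # the model: service -> instance count
current = {"web": 1, "batch": 2}  # what is actually deployed

print(reconcile(desired, current))
# [('provision', 'web', 2), ('provision', 'db', 1), ('deprovision', 'batch', 2)]
```

The manageability win is that the model is the single artifact you maintain and review; the same reconcile logic applies no matter how the environment has drifted, which is exactly what an imperative script cannot promise.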
I recently read the book The Phoenix Project by Gene Kim, Kevin Behr, and George Spafford. If you are in development, IT, or security, it should be #1 on your reading list. In this book, the authors discuss all the horrors we hear about in IT, with a clear direction on how to fix them. There are politics, shadow IT, overzealous security professionals, overworked critical employees, and lots of finger-pointing. But there is a clear solution, at least as far as the story goes. We also know that DevOps works, most of the time.
Over the last few years there has been an increase in the number of database as a service (DBaaS) offerings entering the marketplace. IaaS providers like Amazon have released solutions such as RDS that automate database administration tasks in the areas of scaling, replication, failover, backups, and more. There are also a number of companies offering automation around NoSQL and big data technologies like Hadoop, MongoDB, Redis, Memcached, and numerous other database technologies.
The Cloud Management layer of the Software Defined Data Center is where the flexibility of the SDDC is translated into tangible benefits for the business constituents of the virtualized data center, the private clouds, the hybrid clouds, and the public clouds. Without a robust Cloud Management layer, the IT operations flexibility of the SDDC cannot translate into the relevant level of business agility for the business.
While at Interop, I participated in a Tech Field Day event where Spirent was talking about their new Axon product as well as the possible use of Blitz.io. It was an interesting discussion, but it gave me some food for thought. As we move to cloud-scale apps based on platforms such as Pivotal (EMC World was just down the street), OpenShift, and others, we need a way to test those applications at scale. Spirent and Ixia provide these tools, but would they be used in this new model?
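The at-scale testing problem above can be illustrated with a minimal concurrent load-generation sketch. Here `handle_request` is a hypothetical stand-in for a real service endpoint; a product like Spirent Axon or a service like Blitz.io would drive real traffic against a deployed application instead.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def handle_request(i: int) -> float:
    # Simulated service work standing in for a real HTTP call;
    # returns the observed latency for this request.
    start = time.perf_counter()
    time.sleep(0.005)
    return time.perf_counter() - start

# Fan out 100 requests across 20 concurrent workers, the way a
# load generator exercises a cloud-scale app from many clients at once.
with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = list(pool.map(handle_request, range(100)))

print(f"requests: {len(latencies)}, avg latency: {sum(latencies) / len(latencies):.4f}s")
```

The open question from the discussion remains: on a platform such as Pivotal or OpenShift, where instances scale elastically, the interesting measurements are about how the platform reacts to this kind of fan-out, not just per-request latency.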
At EMC World 2013, one of the big stories was Pivotal and its importance to the EMC2 family and the future of computing. Pivotal is geared to provide the next generation of computing. According to EMC2, we have gone past the client-server style to a scale-out, scale-up, big data, fast data, Internet of Things form of computing. The real question, however, is how we can move traditional business-critical applications to this new model, or whether we should. Is there a migration path one can take?