EMCworld 2011 was full of very interesting announcements and statements by EMC and VMware executives. Among them:
EMC eats its own “dog food” to the tune of 7-10 PB of data, of which only ~2 PB is in constant use; the rest is historical data storage and disk libraries. EMC also makes heavy use of VPLEX Metro (synchronous) to keep its existing data centers in sync, and when it moves its data center, VPLEX Geo (asynchronous) will figure heavily in the migration plans. In addition, EMC is roughly 80% virtualized, with a goal of hitting 90% over the next few years. Lastly, one of the coolest aspects of EMC’s IT group is that it has an official channel back into engineering to raise issues with, help solve, and report back on products to improve their overall functionality, availability, and capabilities. This integration is all about cloud deployments and creation.
Project Lightning was announced, which in effect provides a distributed high-speed storage cache within each host, controlled by a Symmetrix VMAX; it moves storage closer to each host, providing lower-latency access. In a related announcement, EMC described a project to move compute processing closer to the storage by allowing ESX to run side by side with the Isilon storage stack, providing increased IOPS for large data sets. This move is all about the data.
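Project Lightning’s internals were not public at the time, but the core idea — serving hot blocks from a fast tier near the host so reads skip the round trip to the array — can be sketched as a minimal read-through LRU cache. All names here are illustrative assumptions, not EMC’s implementation:

```python
from collections import OrderedDict

class HostReadCache:
    """Illustrative read-through LRU cache: hot blocks are served locally;
    misses fall through to the (slower) backing array."""

    def __init__(self, backend_read, capacity=1024):
        self._read = backend_read      # function: block_id -> bytes (the array)
        self._cache = OrderedDict()    # block_id -> bytes, in LRU order
        self._capacity = capacity

    def read(self, block_id):
        if block_id in self._cache:
            # Cache hit: no trip to the array, hence the latency win.
            self._cache.move_to_end(block_id)
            return self._cache[block_id]
        data = self._read(block_id)    # Cache miss: fetch from the array.
        self._cache[block_id] = data
        if len(self._cache) > self._capacity:
            self._cache.popitem(last=False)  # Evict the coldest block.
        return data
```

The interesting part of the real product is that the array, not the host, stays in control of what is cached and coherent; this sketch only shows why locality cuts latency.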
Paul Maritz dropped an interesting bomb as well: he claimed the hypervisor is the least interesting component of the modern data center stack and explained VMware’s move up the stack into management, security, and performance analytics, where vCenter Operations, vShield, and vCloud Director figure heavily. vCloud-to-vCloud federation is now available, so that security and runtime context travel with virtual data centers as they move between clouds. Yes, virtual data centers sit on top of the entire stack, which is very close to the 2008 statement of a virtual datacenter operating system. That message has not really changed much, but it is now being restated as subtext. VMware still needs to innovate, and continue to innovate, on the functionality of the hypervisor. The next release should not be the last release, and I do not think it will be; there are still problems to solve and new functionality to add. While VMware tackles the items higher in the stack, it cannot afford to forget about the hypervisor. It may be the least interesting piece, but it is like a car engine: without the engine the car does not move, and without the hypervisor, virtualization does not move. Least interesting, but still important. This move is about shifting the ecosystem’s concentration away from the hypervisor toward higher-order functions, primarily cloud.
There was a combination of statements at EMCworld that provide more insight into the strategic direction of EMC and VMware cloud offerings of the future. Greenplum is looking 5-10 years into the future when they consider applications made for the cloud, and specifically big data. In addition, they plan on working with Cloud Foundry and are thinking about Amazon S3 compatibility, as well as the possibility of providing an SQL layer to aid in porting Oracle- and MSSQL-based applications. These thoughts are directly in line with VMware’s desire for Cloud Foundry, and with why Red Hat created OpenShift. There is a dearth of applications that can scale into the cloud, apart from special-purpose applications such as SalesForce and mail systems. In essence, when SAP is a cloud application, cloud is here to stay. Without such applications, the cloud will stay special purpose and designed to provide infrastructure, which means a VM’s data is tied to an operating system and application instead of just to an application. Once we can tie data to an application built upon a platform, movement of data between clouds may no longer be limited to ‘like clouds’. Ideally, one should be able to develop in Amazon and migrate to VMware vCloud without a loss of functionality or security, yet for that we need applications purpose-built for the cloud. The long-term view is 5-10 years for many of these applications to be developed. In the meantime, EMC has started its own integration across clouds by providing Amazon S3 compatibility within Atmos 2.0. S3 compatibility could become the cloudMotion layer where we can move data (between S3-compatible clouds) and applications that are programmed to S3 from cloud to cloud. However, this would be a cold-migration approach, as we still do not tie runtime, security, and networking context to our applications or even data. This is all about cloud application creation.
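The S3-compatibility point is concrete enough to sketch: because any S3-compatible endpoint (Amazon, or an Atmos 2.0 gateway) speaks the same object API, a cold migration reduces to reading an object from one endpoint and writing it to another. This is a minimal sketch using boto3-style clients; the endpoint URL, bucket, and key below are placeholder assumptions, not real Atmos or Amazon values:

```python
def copy_object(src, dst, bucket, key):
    """Cold-migrate one object: read it from the source cloud's S3 API,
    write it to the target cloud's S3 API. No runtime, security, or
    networking context travels with it -- only the bytes."""
    body = src.get_object(Bucket=bucket, Key=key)["Body"].read()
    dst.put_object(Bucket=bucket, Key=key, Body=body)

if __name__ == "__main__":
    import boto3
    # Any S3-compatible endpoint can stand in on either side;
    # the endpoint URL here is a hypothetical Atmos gateway.
    amazon = boto3.client("s3")
    atmos = boto3.client("s3", endpoint_url="https://atmos.example.com")
    copy_object(amazon, atmos, "my-bucket", "data/report.csv")
```

The point of the sketch is the limitation called out above: the copy carries data only, which is exactly why this is cold migration rather than cloudMotion.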
There is also the thought that we are heading down a path not of The Cloud but of multiple special-purpose clouds with strict controls on who can access them. This is one way to improve compliance, and at some level security, within a multi-tenant environment. By keeping like data within these special-purpose clouds, you can apply the same compliance and security measures to all of the data equally. However, you still need to trust your cloud and virtualization administrators. This is all about cloud deployments in the short term.
These announcements and ideas paint a better direction for cloud development and creation than existed even one week ago. They also concentrate on the data, not the compute engine(s) within the cloud; it has always been about the data. With the major cloud enabler admitting we are quite a ways from cloud applications, the emphasis shifts to metrics and analytics while moving along the ability to concentrate on just the application and its data. As that happens, the goal of cloudMotion could become a reality.
Latest posts by Edward Haletky (see all)