At EMCworld 2013, one of the big stories was Pivotal and its importance to the EMC family and the future of computing. Pivotal is geared to provide the next generation of computing. According to EMC, we have moved past the client-server model to a scale-out, scale-up, big data, fast data, Internet of Things form of computing. The real question, however, is how we can move traditional business-critical applications to this new model, or whether we should. Is there a migration path one can take?

When I recently asked other PaaS vendors this question, they did not have an answer either, and Pivotal is not much better. When I ask how to migrate from today's traditional applications to a platform on which I want to build those same or new applications, I am given answers that are not migratory but green field. They cite the $100 million investment GE made in Pivotal. Granted, that is a major investment and shows a dedication to the new way of doing things, but how will GE migrate to the new platform?

There needs to be a well-defined migration path for most organizations to move, as it is very expensive to maintain two development staffs. For example, I would expect to hear something like this first:

For traditional database-backed services, the Pivotal migration path is to implement the Pivotal data fabric (GemFire) to abstract the database layer. This is the ‘fast-data’ path of Pivotal.
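To make the idea concrete, the pattern a data fabric such as GemFire provides can be sketched in plain Python. This is a minimal illustration of the cache-aside/write-through idea only; the class and method names here are illustrative assumptions, not the GemFire API.

```python
# Sketch of the pattern a data fabric layers over a database:
# reads hit an in-memory "region" first and fall through to the
# database only on a miss; writes go to both. Names are
# illustrative, not the GemFire API.

class DataFabric:
    def __init__(self, backing_store):
        self._region = {}          # in-memory region (the fast-data layer)
        self._db = backing_store   # the traditional database behind it

    def get(self, key):
        if key in self._region:    # cache hit: no database round trip
            return self._region[key]
        value = self._db[key]      # cache miss: read through to the database
        self._region[key] = value
        return value

    def put(self, key, value):
        self._region[key] = value  # write to the fabric...
        self._db[key] = value      # ...and write through to the database

# Usage: the application talks only to the fabric, never to the database.
orders = DataFabric(backing_store={"order-1": "pending"})
print(orders.get("order-1"))   # first read comes from the database
print(orders.get("order-1"))   # second read is served from memory
```

The point of the abstraction is the last two lines: once the application depends only on the fabric, the database behind it can be changed without touching application code.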

For big data, the Pivotal migration path is to add in its distribution of Hadoop. This is the ‘big-data’ path of Pivotal.

The above seems logical, if more marketing than migration path, but once you have these pieces in place, how do you make further use of them to implement tier 1 applications within the new frameworks? Do you suddenly jump from these tools, or similar tools from other platform providers, to the actual platform, its supported programming languages, and tools? Which raises the question: why use Platform as a Service at all?

PaaS provides a mechanism to abstract the operating system, networking stack, and other aspects of the underlying environment. Instead of worrying about the networking stack, for example, you just use tools to open sockets and send data; the underlying layers take care of the nuts and bolts for you. The same is true for database actions. So why would you not use a PaaS for future development, given that it abstracts so much and provides a common environment for today and future use? It also ties in with DevOps and other agile cloud development capabilities.
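The "open a socket and send data" abstraction is visible even at the language level. A minimal sketch in Python: the application sees only a socket and bytes, while the operating system and platform handle framing, buffering, and transport underneath.

```python
import socket

# A pair of connected sockets. The application only opens a socket
# and sends bytes; everything below that (buffering, transport) is
# handled by the layers underneath -- the same principle a PaaS
# extends up the stack.
left, right = socket.socketpair()
left.sendall(b"hello platform")
data = right.recv(1024)
print(data.decode())   # -> hello platform
left.close()
right.close()
```

A PaaS applies the same principle one level up: instead of configuring servers and networks, the developer binds to whatever endpoint the platform hands the application.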

However, none of these things come up when you discuss migration from traditional environments to PaaS, because of the investment organizations have in existing tools and practices. They seem like great ideas, but we need clearer direction to get to the next generation of applications.

Before Pivotal, VMware was espousing the need for cloud-based applications and invested heavily in tools that could possibly bridge this gap: ThinApp, Spring, GemFire, and others. But these tools themselves do not provide a migration path; they could be used as part of one, yes, but they are not the migration path. Pivotal now owns these tools, and there is still no distinct migration path. Perhaps each organization is different enough that a general blueprint to move from today to tomorrow is not possible. Or is it?

I think it is possible, but PaaS is a truly disruptive force: it requires you to all but rip out and replace existing code bases with new ones. You may even be forced to change the language in which you develop your applications, depending on what the PaaS supports. The real issue for such a migration is how much of the existing code base can be reused. So perhaps the real steps are not only to replace the data path aspects but first to evaluate your existing code base to determine what has to change versus what can be reused. This would require familiarity with the new PaaS architecture, the language currently in use, and the older code bases.
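One way to start such an evaluation is a simple inventory of platform-sensitive calls in the existing code base. The sketch below is a minimal illustration of the idea; the flagged patterns are my own assumptions about what typically needs rework on a PaaS, not a Pivotal-supplied checklist.

```python
import re

# Patterns that often need rework when code moves onto a PaaS:
# raw sockets, local filesystem paths, hard-coded database URLs.
# The list is illustrative, not exhaustive.
PLATFORM_SENSITIVE = {
    "raw socket": re.compile(r"\bsocket\s*\("),
    "local filesystem path": re.compile(r"/var/|/etc/"),
    "hard-coded JDBC URL": re.compile(r"jdbc:\w+://"),
}

def audit(source_lines):
    """Return (line_number, category, line) for each suspect line."""
    findings = []
    for number, line in enumerate(source_lines, start=1):
        for category, pattern in PLATFORM_SENSITIVE.items():
            if pattern.search(line):
                findings.append((number, category, line.strip()))
    return findings

# Usage against a fragment of hypothetical legacy code:
legacy = [
    'conn = DriverManager.getConnection("jdbc:postgresql://db1/orders")',
    'log = open("/var/log/app.log")',
    'total = price * quantity',
]
for number, category, line in audit(legacy):
    print(f"line {number}: {category}: {line}")
```

Even a crude pass like this separates code that merely computes (reusable as-is) from code that touches the environment (the part the migration has to redesign).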

How does your organization plan to move to big and fast data? Is it rip and replace, or do you have a migration path specific to your own code base? Are there commonalities that would be helpful to others?


Edward L. Haletky, aka Texiwill, is the author of VMware vSphere(TM) and Virtual Infrastructure Security: Securing the Virtual Environment as well as VMware ESX and ESXi in the Enterprise: Planning Deployment of Virtualization Servers, 2nd Edition. Edward owns AstroArch Consulting, Inc., providing virtualization, security, network consulting and development and The Virtualization Practice where he is also an Analyst. Edward is the Moderator and Host of the Virtualization Security Podcast as well as a guru and moderator for the VMware Communities Forums, providing answers to security and configuration questions. Edward is working on new books on Virtualization.
