PaaS Platform experience – Red Hat OpenShift

As mentioned in a couple of recent posts, I have been building a prototype application using Open Source technologies that I plan to install on a number of available PaaS cloud platforms. The application is written in Groovy (with some bits in Java) and built on the Grails framework. The choice to go with this set of technologies is documented in Why would a Developer choose VMware?, and my experiences leveraging the Open Source ecosystem around Groovy/Grails are outlined in VMware’s SpringSource Ecosystem.

Subsequent to those posts I still feel pretty comfortable about the technology choices, although I have had to move away from the “scaffolded” user interface provided by Grails and build some JavaScript widgetry that sits in the browser. For this I chose another technology I happen to be familiar with called Dojo. It’s essentially a set of compressed JavaScript files that can be served out from any web server, or even linked to dynamically at one of three externally hosted sites. It can be used to make a web application look like a proper enterprise application with menus, tabbed panels, grids, charts etc. I glued it back into the Grails Controller/Domain layer via JSON and XHR (Ajax).
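To give a flavour of that glue layer, here is a minimal sketch of the server side: a Grails controller action (closure-style, as in the Grails 1.x line I was using) that renders domain objects as JSON for a Dojo widget to fetch with dojo.xhrGet. The WidgetController and Widget names are illustrative, not from my actual application.

```groovy
// Hypothetical Grails controller -- class and domain names are illustrative.
import grails.converters.JSON

class WidgetController {

    // Called from the browser via dojo.xhrGet({ url: '/myapp/widget/list',
    // handleAs: 'json', ... }); Grails' JSON converter serializes the
    // domain instances into the array the Dojo grid binds to.
    def list = {
        render Widget.list() as JSON
    }
}
```

The nice thing about this pattern is that the Controller/Domain layer stays untouched; only the view layer changes from scaffolded GSP pages to JSON endpoints.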

However, my use of Dojo did again point out the strengths and weaknesses of the Open Source community around Grails. Yes, there is a Dojo plugin for Grails, and it does work, but it is by no means complete in its coverage of Dojo, nor is it up to date, so I ended up working much more explicitly in JavaScript than I feel I should have.

After about six weeks of development I now have an application that I would like to deploy onto various PaaS clouds to see how they behave. As I mentioned in my previous post, VMware have been very successful in marketing their Eclipse plugin, which deploys directly from the development environment to CloudFoundry, and this seemed like a great place to start, so I headed over to CloudFoundry to sign up. No money changed hands, but I was a little disappointed to get an email back saying I was in a queue and they’d get to me as fast as they could. In fact it only took 24 hours, but in the meanwhile I decided to have a go with Red Hat’s OpenShift. I will get back to CloudFoundry in a week or two, after my vacation.

OpenShift comes in three different varieties. The base offering, OpenShift Express, does not support Java stacks, so I needed to move to the enhanced offering, OpenShift Flex. This is back-ended by Amazon Web Services (AWS), so you need an account with Amazon as well as with Red Hat. To create an Amazon account you have to give a credit card number, and as it turns out OpenShift will create instances for you that don’t fit within the Amazon free tier (it may be possible to stay within it, but it doesn’t by default), so you will be charged by Amazon for your development environment.

Because Flex is architected on top of compartmentalized resources within AWS, it is not multi-tenanted at the platform level, only at the infrastructure level. By which I mean that you are dealing with multi-tenancy amongst VMs on physical hosts (in the way you would be used to when working with AWS), not amongst applications running on an individual VM, or databases inside a database instance. It feels a bit like a hybrid PaaS/IaaS solution because you see and control both the platform and the infrastructure.

The OpenShift Flex user interface is extremely simple, although perhaps a little under-documented. Basically it allows you to define a cloud service provider (currently just Amazon), define a cluster of machines, deploy an application into that cluster, and then start it up. If you actually want to do this, the best way to find out how is to watch the videos on YouTube. The process is by no means instantaneous (it takes a few minutes), and Red Hat ask you to fill in a questionnaire whilst you are waiting.

Configuring your application infrastructure is very simple, although not as flexible as, say, 3Tera. You get to define a basic load balancer, choose between web servers (actually there’s only one choice, Apache), middleware (PHP, JBoss or Tomcat) and databases (only one choice, MySQL), configure memcached, MongoDB or the Zend Framework, and specify whether or not you want to send email. I chose Tomcat and MySQL because that was closest to the VMware vFabric tc Server/MySQL setup in my STS development environment. The point here is not the fairly limited choice of infrastructure that currently exists but the fact that you are given choices at all levels in the stack (and indeed could in principle choose an IaaS provider other than AWS).

When the application fires up it is given an external DNS name from the pool of dynamically configured Amazon IPs. Presumably there is some way of mapping this through to a proper DNS entry, and I will deal with this (along with other issues that apply across the various clouds) in later posts.
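For what it’s worth, the conventional way to do this mapping is a CNAME record in your own DNS zone pointing at the Amazon-assigned hostname. A sketch, with entirely hypothetical names:

```
; Illustrative BIND zone fragment -- both hostnames are placeholders.
; Points app.example.com at the Amazon-assigned public DNS name.
app.example.com.   300   IN   CNAME   ec2-203-0-113-10.compute-1.amazonaws.com.
```

The usual caveat applies: a CNAME works for a subdomain like app.example.com but not at the zone apex (example.com itself), and because the Amazon name changes if the instance is re-provisioned you would want something more stable (such as a reserved Elastic IP) for production use.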

The process of uploading your application is interesting. When I Googled “OpenShift Grails Deploy” I got one relevant link: someone asking how to do it, and no one responding (well, actually I did respond, explaining how I had done it). It is your AWS instance, so you can simply log in, run scp or rsync or something similar across the machines in the cluster, and Tomcat will pick up the application bundle. However, that’s not very PaaS. If you can find your WAR file, you can use the OpenShift Flex GUI to upload it. It will decompress it into a staging area and then propagate it across the cluster for you.
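The manual, not-very-PaaS route looks roughly like this. Everything here is a placeholder sketch — the hostname, key file, WAR name and Tomcat path are hypothetical, not values OpenShift gives you:

```
# Hypothetical manual deployment -- host, key file and paths are placeholders.
# Copy the WAR produced by 'grails war' up to the instance.
scp -i ~/.ssh/flex-key.pem target/myapp.war \
    admin@ec2-203-0-113-10.compute-1.amazonaws.com:/tmp/

# Drop it into Tomcat's webapps directory; Tomcat's auto-deployer
# notices the new file and expands it.
ssh -i ~/.ssh/flex-key.pem admin@ec2-203-0-113-10.compute-1.amazonaws.com \
    'sudo mv /tmp/myapp.war /opt/tomcat/webapps/'
```

You would have to repeat this for each machine in the cluster, which is exactly the chore the Flex GUI’s staging-and-propagate mechanism takes off your hands.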

The staging area also acts as a version-control capability, allowing you to back out changes to individual files. Presumably there are more fine-grained ways of doing this than uploading a complete WAR file, but the really interesting thing is that it actually worked. I ran into a few issues with JavaScript paths in the Grails Dojo plugin (which is not a SpringSource-supported plugin), but the bulk of the application just worked.

So we come back to this question of the pros and cons of exposing the underlying IaaS layer in a PaaS service. It leaves OpenShift Flex as a simple and transparent layer that sits on top of a fairly mature and well-documented IaaS, makes multi-tenancy less of a concern, and allows the use of the tooling of the IaaS layer for performance management etc. It also allows you to log in to your server. You don’t get root access (the username is admin, in case you’ve hit this page looking for it), but you can at least see what is going on and get access to the application server logs, which (in principle) should be accessible via the OpenShift Flex GUI but which don’t seem to be there in my current version.

On the other hand, this separation of PaaS and IaaS does lead to a proliferation of user interfaces. In addition to the shell console and the OpenShift Flex console where you define and start/stop clusters, you are also in the IDE, where you really want a pre-configured Ant script to do the deployment to OpenShift, and in the AWS Management Console, which shows you what is happening on your virtual machines (to be honest, if you’re being billed by Amazon you need to know whether they think you’ve shut your machines down, and it doesn’t really matter what OpenShift is saying). So the OpenShift user interface, in providing management information and deployment capability, duplicates function that is available elsewhere and more naturally sits there.
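The Ant script I have in mind would be something like the fragment below — a sketch only, since OpenShift doesn’t provide such a target; the host, key file and paths are hypothetical, and it leans on Ant’s optional scp task (which needs the JSch library on the classpath):

```xml
<!-- Hypothetical Ant target: host, key file and paths are placeholders.
     Uses Ant's optional <scp> task, which requires jsch.jar. -->
<target name="deploy-flex" depends="war"
        description="Push the WAR up to the Flex-managed instance">
  <scp file="target/myapp.war"
       todir="admin@ec2-203-0-113-10.compute-1.amazonaws.com:/tmp"
       keyfile="${user.home}/.ssh/flex-key.pem"
       trust="true"/>
</target>
```

Until the platform exposes a proper deployment API, this kind of script is really just automating the scp route described earlier rather than talking to OpenShift itself.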

However, this doesn’t mean that I think the explicit layering of IaaS and PaaS is the wrong way to go, particularly for the Enterprise. The truth is I haven’t yet made up my mind, and it will take a few more deployments onto a few more clouds for me to be clear. I do think this will come back to APIs and API standards, and there is a real opportunity to get the deployment and provisioning specifications standardized at this stage. The OpenShift Flex GUI is written in Flash, so you can’t look at the JavaScript to work out how to build a custom app or Eclipse plugin to configure the platform (I did use Firebug to have a look). There is a CLI, which I haven’t had a chance to look at, but I hope to do so in due course.
