We’ve touched on Red Hat’s Cloud strategy in a number of posts. To summarize, they’re trying to play at every level of the stack, from IaaS and PaaS down to the hypervisor and, of course, the operating system. All layers are open, and as you move further down the stack towards virtualization they are pushing KVM, although they are clear that they have to co-exist with Microsoft and VMware. In the IaaS layer they have DeltaCloud, which is nominally open but is really a Red Hat product with an open veneer. In the PaaS layer they have a stack of really good middleware from JBoss, and an openness to a whole range of Java/JVM and non-JVM languages. They’re punting this out to the world as OpenShift.

So far, although there are nuances that differ from other vendors, the main conclusion is that each individual layer is comparable to offerings from competitors. However, there is one layer that sets Red Hat apart from competitive offerings: MRG (Messaging, Realtime, and Grid, pronounced “Merge”). If you’re wondering what this is, it seems that parts of Red Hat’s marketing department haven’t quite got a clue either, because the market positioning is a bit vague.

Our understanding of this layer is that it is “Messaging” as in IBM’s WebSphere MQ, but instead of being targeted at loosely-coupled systems (mainframe to non-mainframe coupling, etc.), it is messaging inside tightly-coupled distributed-memory systems, such as large-scale supercomputers – which these days are constructed from very large numbers of servers connected in a grid (and are therefore quite similar to a Cloud). There are two major determinants of overall performance in the grid: the node-to-node bandwidth (which is also influenced by the grid topology and routing), and the amount of latency in the messaging stack at each end of the communication – or, more specifically, the amount of CPU processing that is wasted in the process of communication. This overhead matters because it limits the fineness of the grain size into which it is sensible to decompose problems, thereby reducing the application’s ability to exploit massive parallelism.
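The grain-size argument can be illustrated with a back-of-the-envelope model (the numbers below are hypothetical, for illustration only – not Red Hat benchmarks):

```python
# Illustrative model: a fixed per-message CPU overhead limits how finely
# a problem can be decomposed before communication dominates computation.

def parallel_efficiency(grain_us, overhead_us):
    """Fraction of CPU time spent on useful work, for one grain of
    computation followed by one message exchange (toy model)."""
    return grain_us / (grain_us + overhead_us)

overhead_us = 20.0  # hypothetical per-message CPU cost, microseconds

for grain_us in (1000.0, 100.0, 10.0):
    eff = parallel_efficiency(grain_us, overhead_us)
    print(f"grain {grain_us:7.1f} us -> efficiency {eff:.0%}")
```

Shrinking the grain from 1000µs to 10µs against the same fixed overhead collapses efficiency, which is why cutting the messaging stack’s CPU cost directly increases the parallelism an application can exploit.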

Red Hat’s MRG 2.0 release has focused on getting this overhead down for the emerging standard of 10G Ethernet, rather than the RDMA InfiniBand it previously supported, thereby reducing the price point of the interconnect and making it more relevant to a standard Cloud environment, which tends not to invest in esoteric networking hardware. Associated with this is an improvement to the efficiency of the distributed scheduler for the jobs into which the problem is decomposed.

Between “Messaging” and “Grid”, the third element of MRG is “Realtime”: MRG provides a realtime kernel that allows messages to be serviced from queues within defined deadlines, which is useful for battlefield simulations and the like.
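The point of a realtime kernel here is bounded worst-case scheduling jitter, not raw speed. A toy check (with hypothetical, illustrative numbers) makes the distinction concrete:

```python
# Toy illustration: whether a message can always be serviced within its
# deadline depends on worst-case scheduling jitter, which a realtime
# (preemptible) kernel bounds far more tightly than a stock kernel.

def meets_deadline(service_us, max_jitter_us, deadline_us):
    """Worst-case response = service time + worst-case scheduling jitter."""
    return service_us + max_jitter_us <= deadline_us

# Hypothetical figures: a stock kernel may see millisecond-scale jitter
# under load; a realtime kernel bounds it to tens of microseconds.
print(meets_deadline(200, 5000, 1000))  # stock kernel: deadline missed
print(meets_deadline(200, 50, 1000))    # realtime kernel: deadline met
```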

Grid computing at this scale can be rather esoteric. As well as the military, it is the domain of particle physicists, nuclear weapons simulations, astronomers, and emerging applications in the life sciences, but there are commercial applications in Hadoop and other map-reduce platforms used for offline data analysis, as well as in low-latency trading systems for financial markets. So in some sense the MRG announcement can be seen as a response to VMware’s NYSE cloud announcement, suggesting there is a better technology fit for trading applications on Red Hat than there is on VMware.
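For readers unfamiliar with the map-reduce pattern that Hadoop-style platforms implement, a minimal in-process sketch looks like this (real platforms distribute the same phases across a grid of machines; the word-count example is the conventional illustration, not anything specific to MRG):

```python
from collections import defaultdict
from itertools import chain

# Map phase: turn each input record into (key, value) pairs.
def map_phase(record):
    for word in record.split():
        yield word, 1

# Reduce phase: combine all values collected under one key.
def reduce_phase(key, values):
    return key, sum(values)

records = ["red hat mrg", "red hat cloud"]

# Shuffle/group step: gather values by key before reducing.
groups = defaultdict(list)
for key, value in chain.from_iterable(map(map_phase, records)):
    groups[key].append(value)

counts = dict(reduce_phase(k, v) for k, v in groups.items())
print(counts)  # → {'red': 2, 'hat': 2, 'mrg': 1, 'cloud': 1}
```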

The marketing push around MRG 2.0 seems to indicate that it will be more broadly adopted in Red Hat’s Cloud strategy, providing scalability enhancements for more general-purpose applications.  It is, however, hard to see how this will happen for most enterprise applications based on J2EE or other middleware because they simply won’t be engineered to use the MRG stack, except perhaps in the PaaS or IaaS management layer.

There is no doubt that MRG is a differentiator between Red Hat and the other players, and it does have its uses in both commercial and scientific applications. It is hard to see, however, that Red Hat will be able to leverage its technological advantage in this one area into a benefit in the more general marketplace. In many ways it highlights one of Red Hat’s problems: although successful, it isn’t actually a very large company, and yet it is seeking to compete across all layers in the stack and to appeal to both commercial and scientific computing communities.

Mike Norman (104 Posts)

Dr Mike Norman is the Analyst at The Virtualization Practice for Open Source Cloud Computing. He covers PaaS, IaaS, and associated services such as Database as a Service from an open source development and DevOps perspective. He has hands-on experience in many open source cloud technologies, and an extensive background in application lifecycle tooling; automated testing – functional, non-functional, and security; digital business; and latterly DevOps.
