NYSE Technologies is providing the first special-purpose financial cloud, built on VMware and EMC technology, to enable new business models in which NYSE Technologies supplies the plumbing for the global capital markets and delivers business agility at lower cost, encouraging brokers and other financial institutions to build applications and test algorithms within the Capital Markets Community Platform.
NYSE Technologies is using localized datacenters (hubs) within New York, London, Toronto, Tokyo, São Paulo, and other market locations to provide the same services within each market's appropriate jurisdiction. The hubs will be interconnected for data sharing based on jurisdictional requirements. Data is collected in real time within the hubs and then made available to third parties to build applications and test algorithms. The data in use would be classified as “Big Data,” so I would assume EMC’s Greenplum and Isilon products are situated within each hub to make management of the data easier. This takes “how big your wallet is” out of the equation, allowing small users to play in the big markets.
This platform puts all tenants on a level playing field with respect to transaction times, data access times, and the other time-sensitive operations required by the trading lifecycle: the platform datacenters are extremely close to the actual exchanges, thereby giving all tenants the same level of access.
NYSE Technologies provides an existing secure environment for its technologies, with trusted administrators, which the Capital Markets Community Platform will inherit. Eventually the hope is that the 1200 trusted customers will migrate to the cloud offering. While the cloud offering has trusted users and trusted administrators, it makes use of VMware vCloud Director and vShield technologies internally to ensure that tenants cannot see other tenants’ data or the algorithms being run within the cloud. While this is trusted multi-tenancy, it is not secure multi-tenancy: in trusted multi-tenancy, the tenants must be trusted as well as the administrators. This is quite acceptable in a specialized (community) cloud environment where trust is key. Even so, I wonder if this cloud will actively participate in Cloud Security Alliance initiatives such as CloudAudit. I also wonder how the cloud will police itself for noisy neighbors, such as an algorithm test that starts to consume more CPU than allowed. How will utilization caps be set so that all tenants have the same level of availability?
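One way such caps could work (a minimal sketch of my own, not NYSE Technologies’ actual mechanism) is to convert each tenant’s purchased entitlement into a hard slice of the host’s CPU capacity, so that the slices can never sum to more than the host provides and no single runaway algorithm test can starve its neighbors:

```python
# Hypothetical sketch: translate tenant entitlements into hard per-tenant
# CPU caps. Tenant names and capacity numbers are invented for illustration.

def cpu_caps(total_mhz, entitlements):
    """Divide a host's CPU capacity into hard per-tenant limits.

    entitlements maps tenant -> share (e.g., purchased CPU units).
    Each cap is the tenant's proportional slice of total_mhz, so the
    sum of all caps never exceeds the host's capacity.
    """
    total_shares = sum(entitlements.values())
    return {tenant: total_mhz * share / total_shares
            for tenant, share in entitlements.items()}

caps = cpu_caps(24000, {"broker-a": 2, "broker-b": 1, "broker-c": 1})
# broker-a is capped at 12000 MHz; b and c at 6000 MHz each, no matter
# how noisy any one tenant's workload becomes.
```

In vSphere terms this maps naturally onto CPU limits and reservations per resource pool; the open question in the article remains who sets those numbers and whether tenants can see them.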
The environment will also include deployments of physical servers as needed by tenants to handle high-throughput, high-CPU workloads, so the Capital Markets Community Platform is both a cloud and a hosting platform. If the cloud is built upon Cisco UCS or HP Matrix blades, then even these hosted servers could be rapidly deployed. The goal is to eventually have hybrid clouds, where a tenant’s environment not only lives in the cloud but also lives within the tenant’s own datacenter and makes use of the cloud environment at need. Since the rates are based on CPU usage, this may become an attractive way to initially enter the Capital Markets Community Platform. Will this cloud also participate in CloudAudit so that potential tenants can make intelligent choices based on compliance requirements and provide the details to auditors? The tenants do get audited and are responsible for their own compliance.
The intention is for the platform to be 100% self-service, which I would assume also implies self-service deployment of physical hosts as well as virtual machines. However, this aspect is not enabled yet, and all deployments will be managed by NYSE Technologies until all the kinks are worked out. Given that IT as a Service and self-service are key components of a cloud, and that they are using vCloud Director, some form of self-service should not be that far off. I also see vCloud Director’s lifecycle management and chargeback features being used heavily, but chargeback would also need to cap a tenant’s utilization so that the tenant does not become a noisy neighbor, which could easily happen when testing algorithms.
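A chargeback model that also enforces a cap might look like the following sketch. The rate, cap, and sampling interval are invented for illustration; the point is that clamping a usage sample at the cap both bounds the bill and marks where the platform would throttle rather than charge more:

```python
# Hypothetical sketch of usage-based chargeback with a utilization cap.
# All rates and numbers are illustrative, not the platform's actual pricing.

def charge(samples_mhz, rate_per_mhz_hour, cap_mhz, interval_hours=1.0 / 60):
    """Bill a tenant for sampled CPU usage, clamping each sample at the
    tenant's cap. The cap bounds the bill and marks where the platform
    would throttle a noisy neighbor instead of billing the overage."""
    billed = 0.0
    throttled_samples = 0
    for mhz in samples_mhz:
        if mhz > cap_mhz:
            mhz = cap_mhz          # throttle: never bill above the cap
            throttled_samples += 1
        billed += mhz * interval_hours * rate_per_mhz_hour
    return billed, throttled_samples

# An algorithm test spikes to 6000 MHz against a 4000 MHz cap:
bill, spikes = charge([1000, 6000, 2000], rate_per_mhz_hour=0.01, cap_mhz=4000)
```

Tying the cap into the same metering pipeline as billing would also give the platform a natural audit trail of throttling events for the compliance reporting discussed above.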
The cloud is built on three layers: the jurisdictional issues are handled by the datacenter or hub location, the middle layer is a conglomerate of data, and the top layer consists of applications such as back-testing of algorithms against that conglomerate of data. The question remains how this data is being handled. Since it is a Big Data problem, I expected EMC Isilon to be involved, but it appears instead to be EMC VNX with FAST. Is asynchronous VPLEX also being used to keep the conglomerate of data in each hub in sync with the others, so that algorithms can be tested in one market location against other market locations, or will tenants also need to set up services within each hub?
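To make the top layer concrete, here is an illustrative sketch of the kind of back-test a tenant might run against a hub’s historical data. The strategy (a simple moving-average crossover) and the price series are invented; they say nothing about what tenants actually deploy on the platform:

```python
# Illustrative only: a toy back-test of the sort the platform's top layer
# would run against the middle layer's historical data.

def backtest_sma_cross(prices, fast=3, slow=5):
    """Hold the instrument whenever the fast moving average is above the
    slow one; return the cumulative return of following that signal."""
    def sma(i, n):
        # Simple moving average of the n prices ending at index i.
        return sum(prices[i - n + 1:i + 1]) / n

    equity = 1.0
    for i in range(slow, len(prices)):
        if sma(i - 1, fast) > sma(i - 1, slow):   # signal from the prior bar
            equity *= prices[i] / prices[i - 1]   # hold for one bar
    return equity
```

Even a toy like this shows why hub-local data matters: the back-test touches every bar of history, so shipping the algorithm to the data beats shipping the data to the algorithm.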
Trading data is fairly structured, so does the cloud also contain EMC Greenplum installations to handle fast access to all that data, or perhaps Hadoop functionality? Or is this left up to the tenant to implement, or is everyone on a level playing field for data access speed?
As we march towards the public cloud, I expect more community clouds to pop up. I also see them existing side by side with the public cloud for quite some time (per figure 1), as the eventual public cloud needs to meet the needs of millions. This implies that heavy-lifting, expensive technologies such as Greenplum will not be part of the general public cloud but will continue to be needed for specific use cases such as the Capital Markets Community Platform, where trust in the tenants and administrators is required by the type of work being done.
Latest posts by Edward Haletky (see all)