There is a class of applications that is extremely difficult to virtualize: graphics-intensive applications such as ProEngineer, Photoshop, and pretty much anything that requires a GPU to perform well. These applications have usually been too big or too expensive to virtualize; the last mile, so to speak. That is no longer the case: with NVIDIA's announcement of the NVIDIA VGX Cloud Platform, this and other classes of applications can now be virtualized.
There seems to be a myriad of definitions of who counts as a tenant when it comes to secure multi-tenancy. This debate has occurred not only within The Virtualization Practice but also at the recent Interop and Symantec Vision conferences I attended. So who really is the tenant within a multi-tenant environment? Multiple definitions appear to exist, and if we cannot define "tenant", how do we build secure applications that claim to be multi-tenant?
There are many SaaS and security SaaS cloud services out there, but they all lack one thing: full visibility. Why do these cloud offerings limit the ability to perform compliance auditing, forensics, and basic auditing against an organization's data retention, data protection, and other necessary policies? Why not simply grant the "right to audit", or better yet, build a way for each tenant to perform its own audit down to the hardware? Why limit this by leaving it out of contracts as well as the technology? It is all feasible.
Many of the virtualization security people I have talked to are waiting patiently for the next drop of leaked VMware hypervisor code. The real question on many minds is whether this changes the threat landscape and raises the risk unacceptably. So let's look at the current hypervisor threat landscape within the virtual environment to determine whether this is the case, and where such source code will have an impact. Are there steps you can take now, before the code drop is complete, to better secure your environment?
A customer recently asked me: can we virtualize our Tier 1 application, which receives 7 billion requests per day? My initial response was: on how many servers? Their answer was 15. That is quite a shocking set of numbers to consider. Add in figures such as 150,000 sessions per second, the need for a firewall, and sub-second response times, and you end up with a few more shocking numbers. So could such a workload be virtualized, or is it too big for virtualization?
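To put those figures in perspective, a quick back-of-the-envelope calculation is useful. The request and server counts below come from the customer's numbers above; the assumption that load is spread evenly across all 15 servers is mine, for illustration only.

```python
# Rough sizing check for the Tier 1 workload described above.
# Figures from the article: 7 billion requests/day across 15 servers.
# Assumption (mine): load is distributed evenly across the servers.
REQUESTS_PER_DAY = 7_000_000_000
SERVERS = 15
SECONDS_PER_DAY = 24 * 60 * 60  # 86,400

avg_rps = REQUESTS_PER_DAY / SECONDS_PER_DAY
avg_rps_per_server = avg_rps / SERVERS

print(f"Average requests/sec overall:    {avg_rps:,.0f}")
print(f"Average requests/sec per server: {avg_rps_per_server:,.0f}")
```

Averaged over a day, that works out to roughly 81,000 requests per second overall, or about 5,400 per server, which also shows why the quoted 150,000 sessions per second must represent peak load rather than the average.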
We here at The Virtualization Practice are getting ready to have a cloud presence. Since we "eat our own dog food" with a 100% virtual environment, we are gearing up to move some of those workloads into a hybrid cloud. We already use some cloud resources, but now is the time to look at other workloads. Our reasons for moving to the cloud are threefold: we cannot write about the various aspects of being a tenant in the cloud if we are not one; we suffered a recent power outage at the grid level; and we have an upcoming data center move. Two of these reasons are all about business continuity, while the first is about what we do. Although we already have a cloud running within our own environment, it is time to branch out.
The OpenStack Conference 2012 is full of OpenStack fans, aficionados, developers, and companies building a business on the ecosystem. However, I kept hearing that OpenStack was a replacement for VMware. Why is this even a possibility, and why did Rackspace, and now HP, build public clouds using this technology? The easy answer is to save money. But is that the only answer? What is OpenStack, and why is it becoming important?
While at InfoSec World 2012's summit on cloud and virtualization security, the first talk was on securing your data, and the second was on penetration testing to ensure that data was secure. In essence, it has always been about the data, but there is a huge difference between what a tenant can do and what the cloud or virtual environment provider can do with respect to data protection and security. This gap is apparently becoming wider instead of smaller as we try to understand tenant versus cloud provider security scopes. There is a lack of transparency with respect to security, but at the same time there are movements to gain that transparency. Secret sauces, scopes, legislation, and lack of knowledge, however, seem to be getting in the way.
VMware's Project Octopus and others like ownCloud and Oxygen Cloud have stirred some interesting ideas about application security. Applications that make use of SSL, which is nearly every web application, could use such secure data storage for certificate verification. What makes SSL man-in-the-middle (MiTM) attacks possible is mostly poor certificate management. If there were a way to remove the user from this security decision, SSL MiTM attacks would be significantly reduced.
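One way to take the user out of the loop, along the lines suggested above, is certificate pinning: compare the certificate a server presents against a fingerprint fetched from trusted storage, rather than asking the user to click through a warning. The sketch below is a minimal illustration, not any product's actual mechanism; the `PINNED_FINGERPRINTS` store is a hypothetical stand-in for the kind of secure, synchronized storage these services provide.

```python
import hashlib
import socket
import ssl

# Hypothetical pin store: hostname -> expected SHA-256 fingerprint of the
# server's DER-encoded certificate. In the scenario the article describes,
# this would live in trusted cloud storage, not be hard-coded.
PINNED_FINGERPRINTS = {
    "example.com": "expected-hex-digest-goes-here",
}

def pin_matches(der_cert: bytes, expected_hex: str) -> bool:
    """Core check: does the certificate's SHA-256 fingerprint match the pin?"""
    return hashlib.sha256(der_cert).hexdigest() == expected_hex

def server_cert_is_pinned(host: str, port: int = 443) -> bool:
    """Connect, fetch the peer certificate, and verify it against the pin
    store automatically, with no user involvement in the decision."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der_cert = tls.getpeercert(binary_form=True)
    expected = PINNED_FINGERPRINTS.get(host)
    return expected is not None and pin_matches(der_cert, expected)
```

A MiTM proxy terminating the TLS connection would present a different certificate, so its fingerprint would fail the comparison and the connection could be refused without ever prompting the user.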