Come on, let’s get real here. The software-defined data center may become the norm in two years in the gilded cages of Silicon Valley, North Carolina’s Research Triangle, and the other “centers of excellence” out there. But in the real world—you know, the one where companies are still using NT4 servers to deliver real and useful work—surely this is not the case.
As your software-defined data center (SDDC) grows, so does the number of privileged accounts. This was the discussion on the Virtualization Security Podcast of February 13, 2014, where we were joined by Thycotic Software. Privileged accounts are used by administrators and others to fix issues, set up new users, add new workloads, move workloads around your SDDC, harden those workloads, and perhaps even log in just to pull down logs for further use. The list of reasons to use privileged accounts is as endless as your system administrator’s stack of work. Yet today, access to these accounts almost always comes down to simply knowing the password.
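The alternative to shared-password access is a vault that checks credentials out per use, rotates them on check-in, and records who touched what. The sketch below is a toy illustration of that pattern in Python; the class names and rotation policy are my own assumptions, not Thycotic's product or API.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class PrivilegedAccount:
    # Hypothetical model of a privileged account; the password is
    # generated by the vault, never chosen or memorized by an admin.
    name: str
    _password: str = field(default_factory=lambda: secrets.token_urlsafe(16))

class Vault:
    """Toy credential vault: passwords are checked out per task and
    rotated on check-in, so no one retains a long-lived shared password."""

    def __init__(self):
        self._accounts = {}
        self.audit_log = []  # (admin, account, action) tuples

    def add(self, account):
        self._accounts[account.name] = account

    def check_out(self, name, admin):
        # Record who took the credential, then hand it over.
        self.audit_log.append((admin, name, "check_out"))
        return self._accounts[name]._password

    def check_in(self, name, admin):
        # Rotate the credential so the checked-out password is single-use.
        self._accounts[name]._password = secrets.token_urlsafe(16)
        self.audit_log.append((admin, name, "check_in"))

vault = Vault()
vault.add(PrivilegedAccount("esxi-root"))
pw1 = vault.check_out("esxi-root", "alice")
vault.check_in("esxi-root", "alice")
pw2 = vault.check_out("esxi-root", "bob")
assert pw1 != pw2  # alice's old password no longer works for bob
```

The point of the sketch is that "access" shifts from knowing a secret to being authorized for a check-out, with every use leaving an audit trail.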
Many network virtualization products appear to be aimed at the top 10,000 customers worldwide, which accounts for both their pricing and their published product direction. While this is a limited and myopic view, many claim it is for the best, their reasoning being that network virtualization is only really needed by very large networks. The more I think about this approach, the more I believe it is incorrect. Let us be frank here. Most networking today, across organizations of many different sizes, is a hodgepodge of technologies designed to solve the same problem over and over: how to get data quickly from point A to point B with minimum disruption to service.
The software-defined data center: that was pretty much the biggest takeaway from this year’s VMworld in San Francisco. VMware made announcements about the new vSAN, coming soon to enhance software-defined storage, and about the NSX platform, which addresses one of the final hurdles on the path to a completely software-defined data center: network virtualization. There have been plenty of write-ups on these topics, including one very good post from one of my colleagues, Bernd Harzog. I am not going to go into any details on those announcements except to say that VMware is expanding and positioning itself to be the center of the virtual universe. I believe it will take some time for software-defined networking to really take off. My gut tells me that adoption will be slower at first, just as it was with server virtualization, but when it does take off, I believe the end result has the potential to leave a legacy as great as server virtualization’s, or an even greater one.
At the US VMworld 2013 conference, VMware did an excellent job of explaining how network virtualization and storage virtualization were going to work. Adding network and storage virtualization to the existing virtualization of compute (CPU and memory), along with APIs and policies to manage the whole thing, is what creates a software-defined data center.
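The composition described above can be sketched in a few lines: a single policy object drives provisioning across all three virtualized layers through one API call. This is a minimal illustration of the idea, not any vendor's actual schema; the field names and tier rules are assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class WorkloadPolicy:
    # Hypothetical policy fields; real SDDC policy schemas vary by vendor.
    vcpus: int
    memory_gb: int
    storage_tier: str     # e.g. "gold" = replicated, "bronze" = single copy
    network_segment: str  # logical segment, decoupled from physical wiring

def provision(policy: WorkloadPolicy) -> dict:
    """Illustrative only: one declarative policy configures compute,
    network, and storage together, instead of three separate manual steps."""
    compute = {"vcpus": policy.vcpus, "memory_gb": policy.memory_gb}
    network = {"segment": policy.network_segment, "firewall": "default-deny"}
    storage = {"tier": policy.storage_tier,
               "replicas": 2 if policy.storage_tier == "gold" else 1}
    return {"compute": compute, "network": network, "storage": storage}

vm = provision(WorkloadPolicy(vcpus=4, memory_gb=16,
                              storage_tier="gold", network_segment="web-dmz"))
```

The design point is that policy, not per-device configuration, is the unit of management: change the policy and all three layers follow.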
There has long been a debate about testing products within a virtual environment: not just how to test, but why, and what to test. Some EULAs even limit the reporting of such testing. This was the subject of the 7/25 Virtualization Security Podcast (#112 – Virtualization Security Roundtable), held live from NSS Labs in Austin, TX, where we delved into the issues of testing within a virtual environment. While the discussion was about security products, it is fairly straightforward to apply the concepts to other products within the virtual environment.