Cloud computing has evolved beyond a sole focus on how to construct, secure, manage, monitor, and utilize IaaS, PaaS, and SaaS clouds. As the paradigm matures, it is shifting from pure resource management to combined data and resource management.
The software-defined data center (SDDC) is the next evolution in on-site data center technology. It takes the knowledge gained from the server virtualization revolution and blends it with software-defined storage and networking to create a data center defined and managed entirely by software, with the underlying hardware rendered effectively invisible.
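To make "defined and managed by software" concrete, here is a minimal, hypothetical sketch in Python. Everything in it (the structure of the specification, the reconcile function, the layer names) is invented for illustration and does not reflect any vendor's API; it simply shows the declarative, desired-state model at the heart of an SDDC.

```python
# Hypothetical declarative SDDC specification: the desired state of the
# data center lives in data, and controller software drives the
# underlying hardware to match it.
desired_state = {
    "compute": {"clusters": [{"name": "prod", "hosts": 8}]},
    "storage": {"policies": [{"name": "gold", "replicas": 2}]},
    "network": {"segments": [{"name": "web", "vlan": 110},
                             {"name": "db",  "vlan": 120}]},
}

def reconcile(current: dict, desired: dict) -> list[str]:
    """Return the actions needed to bring current state to desired state.
    A real SDDC controller would execute these via hardware/driver APIs."""
    actions = []
    for layer, spec in desired.items():
        if current.get(layer) != spec:
            actions.append(f"reconfigure {layer} -> {spec}")
    return actions

# Compute already matches; storage and network have drifted.
current_state = {"compute": desired_state["compute"]}
for action in reconcile(current_state, desired_state):
    print(action)
```

The point of the sketch is the direction of control: operators edit the specification, and software, not manual hardware configuration, closes the gap.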
Hybrid Cloud covers the technologies and the operational processes, both technical and business, required to deploy and consume this paradigm.
Major areas of focus include barriers to adoption; progress on the part of vendors in removing those barriers; where the lines of responsibility are drawn between the cloud vendor and the customer for IaaS, PaaS, SaaS, and hybrid clouds; and the management tools essential to deploying and managing the cloud while ensuring its security and the performance of its applications.
The vast majority of companies, meaning any company with multiple sites or remote workers, need to consider the question "Why do I need an SD-WAN?" This is no longer just the purview of the large enterprise.
First, a definition is in order. SDxCentral defines the SD-WAN as follows:
The software-defined wide area network (SD-WAN) is a specific application of software-defined networking (SDN) technology applied to WAN connections, which are used to connect enterprise networks—including branch offices and data centers—over large geographic distances.
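To illustrate the core idea behind that definition, here is a minimal, hypothetical Python sketch. The link attributes, policy thresholds, and function names are all invented for this example (no vendor's SD-WAN API works this way verbatim); it shows only the essential behavior: steering each application's traffic across multiple WAN transports according to a centrally defined policy and measured link health.

```python
from dataclasses import dataclass

@dataclass
class WanLink:
    """One physical WAN transport available to a branch site."""
    name: str          # e.g., "mpls", "broadband", "lte"
    latency_ms: float  # measured round-trip latency
    loss_pct: float    # measured packet loss
    cost: int          # relative monetary cost of using the link

# Centrally defined policy: per-application requirements.
# Thresholds are illustrative, not drawn from any standard.
POLICIES = {
    "voip":   {"max_latency_ms": 150, "max_loss_pct": 1.0},
    "backup": {"max_latency_ms": 500, "max_loss_pct": 5.0},
}

def select_link(app: str, links: list[WanLink]) -> WanLink:
    """Pick the cheapest link that satisfies the application's policy,
    falling back to the lowest-latency link if none qualifies."""
    policy = POLICIES[app]
    eligible = [
        link for link in links
        if link.latency_ms <= policy["max_latency_ms"]
        and link.loss_pct <= policy["max_loss_pct"]
    ]
    if eligible:
        return min(eligible, key=lambda link: link.cost)
    return min(links, key=lambda link: link.latency_ms)

links = [
    WanLink("mpls", latency_ms=40, loss_pct=0.1, cost=10),
    WanLink("broadband", latency_ms=70, loss_pct=0.5, cost=2),
]
print(select_link("voip", links).name)    # broadband: meets policy, cheaper
print(select_link("backup", links).name)  # broadband
```

This per-application, policy-driven path selection over a mix of private and commodity links is what distinguishes an SD-WAN from a conventional routed WAN, where path choice is largely destination-based.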
There are two sides to cloud security: the tenant's and the provider's. From both perspectives, cloud computing currently appears to rely on bolt-ons to create a sense of security. This is more perception than reality, and that perception is what is holding back cloud adoption. What does this perception mean? Are clouds really using bolt-on technologies?
How did the development of hyperconverged infrastructure (HCI) come about? Did someone decide that storage networking and storage arrays were too complex? Did a server vendor look at SANs and decide to do things differently? As far as I'm aware, it happened neither way. The future HCI vendors looked at the challenges of running a virtualized data center and decided it could be done better, and they tended to focus on making it easy to run VMs. Building HCI was the result of this focus on simplicity; it was not the vendors' initial objective. Seeing the first hyperconverged vendors succeed, a number of followers then built the same thing. I'm not sure that simply building HCI is the same as trying to make running VMs easier. HCI is not the primary goal of the leading vendors, and it is probably not all that the followers will need in the long term.
“Service provider” and “enterprise” are often seen as opposites in networking circles. (For the purposes of this article, “enterprise” means “business” rather than “large business.”) I’m fortunate to have worked closely with both service providers and enterprises. The contrast is indeed sharp. Service provider networks are the product to be sold; they need to be fast, responsive, and connected above all else. The way a service provider network consumes equipment and services is fundamentally different from the way an enterprise does. This has more of an impact on how network function virtualization (NFV) works for the service provider than it does for the enterprise. Of course, all service providers also have an enterprise network for their back-office functions.
At HPE Discover this year, the vendor discussions were about composable infrastructure, 25 Gbps networking, VSAN readiness, GPUs, and other new, transformative concepts. These concepts require some significant software and hardware changes. Within the Hewlett Packard Enterprise portfolio, this implies some decisions may need to be made with respect to blades.