As I’ve thought about how to implement high-performance, very large-scale networks within a secure hybrid cloud, I have come to the conclusion that the cloud works best with disaggregated network functions. This is the goal of network function virtualization, or NFV, but the real problem is knowing which functions to virtualize and how to do so at scale. Very large scale. We need to consider the multiple paths our data will take and the rates at which data can pass through the various virtual components that make up the hybrid cloud. When we think hybrid cloud, we need to think scale out, not up. Scaling up can cost lots of money, while scaling out may save dollars. This means we need to rethink networking and security as well as data protection. With containers on my mind, we have a path for our journey.
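To make the scale-out idea concrete, here is a minimal sketch of spreading traffic across replicas of a disaggregated network function: flows are mapped to instances by a stable hash of the flow tuple, so capacity grows by adding replicas rather than buying a bigger box. The instance names and tuple layout are illustrative assumptions, not any particular NFV product's API.

```python
import hashlib

# Hypothetical pool of scaled-out firewall VNF replicas.
INSTANCES = ["fw-0", "fw-1", "fw-2", "fw-3"]

def pick_instance(src_ip, src_port, dst_ip, dst_port, proto):
    """Map a flow to one replica via a stable hash of its 5-tuple.

    The same flow always lands on the same instance, which preserves
    per-flow state; adding replicas to INSTANCES scales capacity out.
    """
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return INSTANCES[digest % len(INSTANCES)]

# Each distinct flow takes its own path through the pool (multipath).
print(pick_instance("10.0.0.5", 443, "10.0.1.9", 51000, "tcp"))
```

Real deployments would use ECMP or a consistent-hashing load balancer for the same effect, but the principle is the one sketched here: distribution, not a single larger appliance.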
Transformation & Agility
Transformation & Agility concerns how organizations use the technical agility delivered by virtualization and cloud computing, coupled with Agile Development practices, to improve business agility, performance, and results. This includes the agility derived from:
- The implementation of Agile and DevOps methodologies
- The application and system architectures
- The implementation of IaaS, PaaS, and SaaS clouds
- Monitoring of the environment, coupled with processes for resolving problems quickly
- Continuous availability through the use of high-availability and disaster-recovery products and procedures
Transformation covers the journey from A to Z and all points between: how you get there and the roads you will travel; how decisions made on day zero or one, or even day three, will affect later decisions; and what technical, operational, and organizational pitfalls can be associated with an implementation. We examine what tool sets are required for Agile Cloud Development, and we delve into other aspects of Agile Development that integrate with cloud computing, SaaS, and PaaS environments, including DevOps, Scrum, XP, and Kanban.
One of the big trends of 2016 was the rise of “serverless” application architectures. The most visible was AWS’s Lambda product, but Microsoft has Azure Functions, and Google has Cloud Functions. But what about organizations that want serverless but must run their IT on-premises? Cloud services are not an acceptable option for some businesses, often due to regulatory limitations. Other businesses need a range of options to suit different needs, such as different cost and performance profiles. Is there any way to have serverless on-premises? The cloud fan’s usual objection is scalability: no on-premises data center has the scale of a cloud provider. But an on-premises cloud only needs to cope with the scaling of one organization. Private clouds also benefit from far greater visibility into business cycles, so private cloud peaks are somewhat predictable. I think the more relevant issues are complexity and skills: does the IT team have the expertise to build and operate a serverless platform?
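Wherever it runs, the unit of deployment in a serverless platform is a small handler function; the platform, not the developer, takes care of scaling, routing, and retries. As a sketch, here is what a minimal handler might look like using the Apache OpenWhisk-style `main(params)` entry point (OpenWhisk is one open-source option for running functions on-premises; the parameter names here are illustrative):

```python
# Minimal "serverless" function sketch (OpenWhisk-style entry point assumed).
# The platform invokes main() with a dict of parameters and expects a
# dict result; there is no server process for the developer to manage.

def main(params):
    """Entry point: receive a dict of parameters, return a dict result."""
    name = params.get("name", "world")
    return {"greeting": f"Hello, {name}!"}

# Locally, the handler is just a function and can be exercised directly:
print(main({"name": "on-prem"}))  # → {'greeting': 'Hello, on-prem!'}
```

The skills question above is less about writing handlers like this one and more about operating the platform underneath them.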
With cloud monopolizing many IT discussions, a great many organizations are somewhere between dipping their toes in and having one foot fully in the cloud. Many get started with Office 365. As with any new technology, embracing it involves learning, planning, and, yes, making a few mistakes before taking the plunge.
RightScale just published its annual report on the state of the cloud, and some of the key findings are very interesting. Topics range from cloud vendor market share to cloud adoption concerns, DevOps tools adoption, public vs. private cloud adoption, and much more. Below, I highlight the major findings I thought most interesting and follow each with my perspective on it.
Over the past few months, I’ve been writing about my engagement with a global organization and its journey of transformation into a more agile organization, driving business enablement. One thing has remained missing: real leadership. This large corporation has thousands upon thousands of people, many of them in “leadership” roles. The problem is that no one seems ready to understand the underlying lessons at play, or able to apply those lessons to their own or their organization’s benefit.
There is a growing movement to encrypt everything. I prefer encrypting specific data, not everything. However, modern CPU chipset features have sped up encryption so much that encrypting everything is a valid option. Encryption requires access to the keys or the related encryption secrets, and those secrets need to be at the fingertips of your applications and management tools. How do we achieve this? The February 9, 2017 Virtualization and Cloud Security Podcast addresses this issue. In this podcast, Virtuozzo’s Chief Software Architect, Pavel Emelyanov, joins us to discuss container encryption.
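One common pattern for keeping secrets at an application’s fingertips is to have the orchestrator or a secrets manager inject the key into the process environment, so the application never hard-codes it. A minimal sketch of that pattern, assuming a hypothetical `APP_DATA_KEY` variable holding a hex-encoded 256-bit key:

```python
import os
import secrets

# Hypothetical environment variable a secrets manager or orchestrator
# (e.g. via an injected secret) would populate with a hex-encoded key.
KEY_ENV = "APP_DATA_KEY"

def get_data_key():
    """Return the data-encryption key as bytes, without hard-coding it."""
    hex_key = os.environ.get(KEY_ENV)
    if hex_key:
        return bytes.fromhex(hex_key)
    # No injected key: fall back to an ephemeral 256-bit key, which is
    # only suitable for development and testing.
    return secrets.token_bytes(32)

key = get_data_key()  # 32 bytes, a suitable length for AES-256
```

The design choice is the indirection itself: the application asks for the key at run time, so rotating the secret or swapping in a real secrets-management backend does not require a code change.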