Earlier today, the owner of a recruitment company asked me, “What’s wrong with dumps?” I’ve seen many blog posts over the years asking about the usefulness of qualifications, usually written by IT veterans with years of experience looking back on a career that, in some cases, was made on the back of good certifications and, in others, was made despite the lack of them. In all cases, though, there is an assumption that the certifications were properly come by. There is never mention of cheating. There is sometimes an issue of impartiality (just how much use can a qualification be if the adjudicator has a bias toward people passing?), but that is as far as the discussion goes. But of course there are cheats. Every system can be circumvented; every system at some point is circumvented.
Transformation & Agility
Transformation & Agility concerns the technical agility that virtualization and cloud computing deliver, coupled with Agile Development practices that improve business agility, performance, and results. This includes the agility derived from:
- Implementing Agile and DevOps methodologies
- Application and system architectures
- Implementing IaaS, PaaS, and SaaS clouds
- Monitoring the environment, coupled with processes for resolving problems quickly
- Continuous availability through high-availability and disaster recovery products and procedures
Transformation covers the journey from A to Z and all points between: how you get there and the roads you will travel; how decisions made on day zero, day one, or even day three will affect later decisions; and what technical, operational, and organizational pitfalls can accompany an implementation. We examine the tool sets required for Agile Cloud Development and delve into other aspects of Agile Development that integrate with cloud computing, SaaS, and PaaS environments, including DevOps, Scrum, XP, and Kanban.
I wrote a little while ago about running a serverless platform on-premises. I have since realized that a few more things are needed before such a platform is useful. Serverless is just a way of executing application code, and most applications need more than execution: at minimum, they need some sort of storage and some trigger mechanisms to tie the execution together. A serverless platform by itself will not solve many problems. To enable your developers to use on-premises serverless, you also need on-premises storage and web services that integrate with the serverless platform.
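The dependency above can be sketched in a few lines. This is a minimal in-process illustration, not any real platform's API: `ObjectStore`, `TriggerBus`, and the thumbnail function are hypothetical stand-ins for an on-premises object store, an event/trigger service, and a serverless function. The point it shows is that the function itself is pure execution and only does useful work once storage and a trigger are wired around it.

```python
class ObjectStore:
    """Stand-in for an on-premises storage service (e.g. an S3-compatible store)."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data):
        self._objects[key] = data

    def get(self, key):
        return self._objects[key]


class TriggerBus:
    """Stand-in for the trigger mechanism that invokes serverless functions."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def fire(self, event, payload):
        for handler in self._handlers.get(event, []):
            handler(payload)


store = ObjectStore()
bus = TriggerBus()


def thumbnail_function(payload):
    # The "serverless" part: pure code execution. Without the storage it
    # reads from and the trigger that invoked it, it can do nothing useful.
    image = store.get(payload["key"])
    store.put(payload["key"] + ".thumb", image[:4])  # pretend resize


bus.subscribe("object-created", thumbnail_function)

# An upload lands in storage, the platform fires the trigger, the function runs.
store.put("photo.png", b"PNGDATA")
bus.fire("object-created", {"key": "photo.png"})
print(store.get("photo.png.thumb"))  # b'PNGD'
```

In a real deployment the same three roles are played by separate services, which is exactly why a serverless runtime alone is not enough on-premises: the integrated storage and eventing that public clouds bundle in must be provided separately.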
When we think about networking, we think about the things that place bumps in the wire: switches, load balancers, firewalls, routers, gateways, and so on. Thankfully, the list is not all that long. Things that put bumps in the wire are at odds with software-defined networking (SDN). SDN relies on a few key services: DNS, identity management, and key management. Without these, many systems would fail outright, yet they are not considered network functions. Network functions are the bumps in the wire we need to make applications work. The goal of network functions virtualization (NFV) is to streamline the delivery of those bumps, reducing complexity while maintaining compatibility. NFV and SDN together lead to an interesting mix of hardware and software, and some of these pieces just do not interoperate well. Is there a better solution?
In a discussion I had yesterday, I noticed that the networking world still has many arbitrary boundaries. It is what we do: create boundaries where none really exist. We do this to cut a problem down to size. Yet when that itself becomes the problem, we end up with design decisions based on our boundaries. We need networking, specifically software-defined networking, to ignore most boundaries. We need to move away from terminology that imposes those boundaries upon our designs. Virtualization is about breaking silos, not imposing them. Network virtualization needs to do the same.
The world is moving to containers! Hop on board, or the train will leave you behind! Whoa, stop right there: take some time to analyze and think about what you are doing. Do you need to rewrite your code? Refactor your infrastructure? Recreate your environment? All of these will take time, money, and experience (knowledge). Get on the track, of course, but where you get off will depend on many factors. There are several first steps you need to consider. There are several pitfalls waiting for you. Learn from those who have gone before you. As with any strategy, whether business or game, you need a plan to move forward. A plan to iterate upon. A plan to reach your goal. We call that an architecture in some cases and, in others, a design. Where are you along the tracks?
There is a lot of talk of having enterprises build and operate IT infrastructure the same way hyperscalers do. AWS, Google, and Microsoft can build and operate cloud platforms that are very cost-effective. The logic is that enterprise businesses can use the same techniques to build and operate their own efficient data centers. I believe that there is some merit in large enterprises trying to follow the hyperscalers’ methods and models. I also think that the nontechnical parts are far more important than the hardware and software selection. We come back to the three parts of a solution: people, process, and technology. Most enterprises look only at the technology part of hyperscale and miss the place where the real efficiency occurs. Hyperscalers are all about minimizing the people and optimizing the processes.