Earlier today, the owner of a recruitment company asked me, “What’s wrong with dumps?” I’ve seen many blog posts over the years questioning the usefulness of qualifications, usually written by IT veterans looking back on a career that, in some cases, was made on the back of good certifications and, in others, was made despite the lack of them. In all cases, though, there is an assumption that the certifications were honestly earned. Cheating is never mentioned. Impartiality sometimes comes up (just how much use can a qualification be if the adjudicator has a bias towards people passing?), but that is as far as the discussion goes. Of course there are cheats. Every system can be circumvented; every system, at some point, is circumvented.
I wrote a little while ago about running a serverless platform on-premises. I have since realized that a few more pieces are needed before such a platform is useful. Serverless is just a way of executing application code, and most applications need more than execution: at minimum, they need some sort of storage and some trigger mechanism to tie the execution together. A serverless platform by itself will not solve many problems. To let your developers use on-premises serverless, you also need on-premises storage and web services that integrate with the serverless platform.
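To make that concrete, here is a minimal sketch of what an integrated function might look like, assuming (purely for illustration) an on-premises OpenFaaS deployment with MinIO providing the object storage; the endpoint, credentials, bucket, and event shape are hypothetical placeholders, not a prescription.

```python
# handler.py -- OpenFaaS-style Python handler (minimal sketch, assumptions noted below)
import io
import json
import os

from minio import Minio  # pip install minio

# On-premises MinIO endpoint and credentials are placeholders, supplied via env vars.
client = Minio(
    os.environ.get("MINIO_ENDPOINT", "minio.internal:9000"),
    access_key=os.environ["MINIO_ACCESS_KEY"],
    secret_key=os.environ["MINIO_SECRET_KEY"],
    secure=False,  # internal network assumed; enable TLS in practice
)

def handle(req):
    """Invoked by a trigger (e.g. a webhook) with a JSON event naming an object.

    Reads the object from on-premises storage, does trivial work, and writes
    a result back. The event shape {"bucket": ..., "object": ...} is assumed
    for illustration only.
    """
    event = json.loads(req)
    bucket, name = event["bucket"], event["object"]

    # Pull the input object from storage.
    data = client.get_object(bucket, name).read()

    # Stand-in for real processing.
    result = f"processed {name}: {len(data)} bytes\n".encode()

    # Write the result back under a placeholder prefix.
    client.put_object(bucket, f"results/{name}", io.BytesIO(result), len(result))
    return f"ok: {name}"
```

The specific tools do not matter: the point is that without the storage client and the trigger that delivers the event, the execution engine on its own has nothing useful to do.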
I recently noticed a tweet by a person I respect, Craig Kilborn. Craig had just written a blog post about why he was pleased that he did not pass the defense stage of the VCDX. The arguments he made in the article were cogent, and I found myself agreeing with them; they aligned with my own view of what the VCDX certification is worth to me personally.
I have not traveled as far down the VCDX path as Craig has, but I find myself pondering the value of the certification today. There is no doubt that the journey towards the certification is a valid one and, more importantly, a valuable learning experience. All those I have spoken to who have traveled the path, whether they gained their number or not, have grown as IT professionals.
When we think about networking, we think about things that go bump in the wire, or rather, things that place bumps in the wire: switches, load balancers, firewalls, routers, gateways, and so on. Thankfully, the list is not all that long. Yet things that put bumps in the wire are at odds with software-defined networking (SDN). SDN relies on certain key services to exist: DNS, identity management, and key management. Without these, many systems would fail outright, yet they are not considered network functions. Network functions are the bumps in the wire we need to make applications work. The goal of network functions virtualization (NFV) is to streamline the delivery of those bumps in the wire, reducing complexity while maintaining compatibility. Together, NFV and SDN lead to an interesting mix of hardware and software, and some of these pieces just do not interoperate well. Is there a better solution?
I will admit, I was surprised recently to discover that VMware has announced the end of life of its support for third-party virtual switches (vSwitches). These switches have been a part of the vSphere ecosystem for many years now, but that relationship with other vendors seems to be coming to a close.
In a discussion I had yesterday, I noticed that the networking world still has many arbitrary boundaries. It is what we do: we create boundaries where none really exist, in order to cut a problem down to size. Yet when those boundaries themselves become the problem, we end up with design decisions driven by them. We need networking, and specifically software-defined networking, to ignore most boundaries, and we need to move away from terminology that imposes those boundaries upon our designs. Virtualization is about breaking down silos, not imposing them. Network virtualization needs to do the same.