Strategy for Cloud Automation

There are a lot of articles about the cloud and cloud computing, but I have not seen many that discuss the different strategies to consider for automation in your environment. I did come across a nice post called “Legacy Job Schedulers: 3 Effective Exit Strategies to Consider,”1 by Jim Manias of Advanced Systems Concepts, Inc., that made some interesting points, and I thought it would be a great topic for discussion.

In this post, Jim Manias starts old school with a reminder that the early stages of automation were managed via schedulers on the host system, kicking off scripts when triggered either manually or by an event. For all practical purposes, if you use PowerShell for any of your automation needs, chances are you have used the Windows Task Scheduler in one form or another.
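That early pattern — a host-level scheduler that simply kicks off scripts when their trigger fires — can be sketched as a minimal polling loop. This is an illustrative sketch only, not any particular scheduler's API; the job table, job name, and interval below are made up for the example.

```python
import subprocess

# Hypothetical job table: name -> (command to run, interval in seconds)
JOBS = {
    "nightly-backup": (["echo", "running backup"], 10),
}

def run_due_jobs(jobs, last_run, now):
    """Run every job whose interval has elapsed and return updated last-run times.

    The scheduler itself knows nothing about what the scripts do -- it just
    shells out to them, which is exactly why this model scales poorly.
    """
    for name, (command, interval) in jobs.items():
        if now - last_run.get(name, 0) >= interval:
            subprocess.run(command, check=True)
            last_run[name] = now
    return last_run
```

A real host scheduler (cron, Windows Task Scheduler) replaces the loop with OS-level triggers, but the division of labor is the same: the scheduler fires, the script does the work.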

There are a number of out-of-the-box automation solutions that give you access to predefined production workflows, which can cut out the need for custom development. This supports the idea that the operations team should be the team driving the automation. However, the term “predefined” often carries a secondary meaning of “limitation.” You still have the ability to write the custom code needed to remove any limitations you encounter with the predefined workflows. The developer role will still be needed for any kind of DevOps model, but I have a strong belief that you can teach Ops to Dev, but cannot really teach Dev to Ops. Let me clarify that statement: operations can and will spend a portion of their time (more so in larger environments) writing scripts to maintain and administer the environment, but you do not see the Dev team performing the day-to-day operations, and that is where the breakdown begins.

I am going to disagree with Jim in that I believe most scheduling systems will get updated over time, whether Linux, Windows, or a third-party scheduling system. Upgrades and updates are pretty much expected, so I do not see that as a specific pain point for the general population. However, there may be specific cases where the scheduling system is truly a legacy system. Possible, but not probable, in my opinion.

Now, here is where Jim and I are in full agreement. Considering that the term “hybrid cloud” is all over the news, maybe it is time to start talking about hybrid automation solutions. Think about this for a second. If you are considering migrating from one automation engine to another, the most logical approach would be to build the new automation engine in parallel with the old system so that a clean cut-over can happen. In all likelihood, you would try to take advantage of any previous code or workflows that could be reused in the new environment. If APIs are present, the migration becomes that much easier, since you can trigger workflows on both the new and the old systems as needed. This can help with any migration to a new automation engine or system.
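The dual-trigger idea during a parallel build-out might look like the sketch below. The engine clients here are hypothetical stand-ins — any object exposing a `trigger(name, payload)` method, wrapping whatever REST/API client the real products provide — and the comparison logic is an assumption about how you would surface discrepancies before cut-over, not any vendor's feature.

```python
def trigger_on_both(job_name, payload, old_engine, new_engine):
    """During a parallel migration, fire the same workflow on both engines.

    The old engine stays the system of record; the new engine's result is
    captured for comparison so discrepancies surface before the cut-over.
    old_engine / new_engine are any objects with a trigger(name, payload)
    method -- stand-ins for the real API clients.
    """
    old_result = old_engine.trigger(job_name, payload)
    try:
        new_result = new_engine.trigger(job_name, payload)
    except Exception as exc:
        # A failure on the new side must not break production yet.
        new_result = f"new engine failed: {exc}"
    return {
        "old": old_result,
        "new": new_result,
        "match": old_result == new_result,
    }
```

Once the `match` rate holds at 100% for long enough, the old engine can be retired and the wrapper deleted.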

Let me take the opportunity to review for a moment. Beyond migrating to a new automation system, and even with most modern automation engines letting you embed multiple scripting languages in a single platform, there is still going to be a need for multiple automation scheduling systems in an environment that spans both Linux and Microsoft. In my opinion, automation is more about the different API connections that can be made. With that thought in mind, connecting multiple automation platform APIs together should be nothing more than connecting any other systems. I am not sure there is much of a pain point in keeping a scheduler up to date, but I do think no one solution or automation engine will ever be able to do everything you truly need. Nor do I believe you will ever fully get rid of the development side of things, since that is what is needed to make a solution truly your own. That said, I am going to double down and repeat my statement: you can teach operations how to code, but it will be a struggle to teach developers how to operate. Find the right combination to create the DevOps you need in your own environment.
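If connecting multiple automation platforms really is just integration work, a thin dispatcher that routes each job to whichever platform owns it is about all the glue you need. Everything here is hypothetical — the class, the platform names, and the `submit(job)` contract are assumptions for illustration, not a real product's API.

```python
class SchedulerDispatcher:
    """Route each job to whichever platform owns it (e.g. cron for the
    Linux hosts, Task Scheduler for the Windows hosts).

    Platform clients are hypothetical: any object with a submit(job)
    method can be registered, so each real API hides behind an adapter.
    """

    def __init__(self):
        self._platforms = {}

    def register(self, platform_name, client):
        """Attach an API client (adapter) under a platform name."""
        self._platforms[platform_name] = client

    def submit(self, job):
        """Hand the job to the platform named in job['platform']."""
        platform = job["platform"]
        if platform not in self._platforms:
            raise KeyError(f"no client registered for {platform!r}")
        return self._platforms[platform].submit(job)
```

The point is less the code than the shape: each scheduler stays the expert on its own hosts, and the integration layer is ordinary API plumbing rather than a forklift replacement.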

1. Jim Manias, “Legacy Job Schedulers: 3 Effective Exit Strategies to Consider,” Advanced Systems Concepts, Inc.

Steve Beaver
Stephen Beaver is the co-author of VMware ESX Essentials in the Virtual Data Center and Scripting VMware Power Tools: Automating Virtual Infrastructure Administration, as well as a contributing author of Mastering VMware vSphere 4 and How to Cheat at Configuring VMware ESX Server. Stephen is an IT veteran with over 15 years' experience in the industry. He is a moderator on the VMware Communities Forum, was elected vExpert for 2009 and 2010, and regularly presents on different topics at national and international virtualization conferences.
