This post is the first in a series of four covering different types of automation methods.

Organizations that want to deliver application, service, and environment changes quickly, consistently, safely, and with a high level of quality invariably need a good, flexible automation system.

Choosing the correct automation method from the start can make the transition to continuous or automated delivery a lot easier.

Current tools for delivering applications in the DevOps and ARA space usually fit into one of these three automation paradigms:

  1. Process or script driven
  2. Model driven
  3. Rule driven

In the past, and in many areas of automation, process/script-driven approaches have prevailed. More recently, especially since the advent of the cloud and software-defined infrastructure, model-driven automation has become increasingly popular.

However, as I will explain in this blog series, both process- and model-driven automation have serious drawbacks. These drawbacks tend to produce rigid processes that are expensive to maintain and evolve, or simply very hard to introduce in the first place. In addition, delivery quality can suffer seriously.

This blog post is the first of four that will elaborate on each paradigm individually, discussing its approach, features, and drawbacks.

Process or script-driven automation

The first paradigm I want to discuss is script- or process-driven automation: a method based on a clearly defined, start-to-end process flow that describes how a change is delivered to destination environments, typically as a series of procedural steps executed to deploy the entire application.

But consider this: such an approach can become painful, because today’s developers are expected to build scalable applications and adopt a hybrid cloud and platform strategy to attain flexibility and continuity at the lowest possible cost. With multiple platforms in mind, a process/script-based solution means you need a unique process for each platform, cloud, application, and/or environment.

Because scripts are unique to each combination of process, environment, and application, they are said to be tightly coupled. As a result, deployment processes may need to be rewritten several times throughout the lifecycle of the application: any time the application, middleware, or components change.
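To make the coupling concrete, here is a minimal sketch of what such a script often looks like, written in Python for illustration. All hosts, paths, and names are hypothetical; the point is that application, platform, and environment details are baked into every step.

```python
# Hypothetical deploy script: application, environment, and platform
# details are hard-coded into every step, so deploying the same app to
# QA, or another app to production, usually means cloning this file.

APP_VERSION = "2.4.1"                 # application-specific
TOMCAT_HOME = "/opt/tomcat9"          # platform-specific path
DB_HOST = "prod-db01.example.com"     # environment-specific host

def build_steps():
    """Return the ordered commands this process would run."""
    return [
        f"scp target/shop-{APP_VERSION}.war deploy@prod-web01:{TOMCAT_HOME}/webapps/",
        f"ssh deploy@prod-web01 {TOMCAT_HOME}/bin/shutdown.sh",
        f"ssh deploy@prod-web01 {TOMCAT_HOME}/bin/startup.sh",
        f"psql -h {DB_HOST} -f migrations/v{APP_VERSION}.sql",
    ]

for cmd in build_steps():
    print("would run:", cmd)  # a real script would execute these
```

Change the middleware (say, Tomcat to a container platform) or the target environment, and every line above has to be revisited. That is tight coupling in practice.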

Another weakness of the process-driven approach is that it does not offer the opportunity to align teams around the same toolset. In fact, it encourages individual teams to maintain their own catalogue of custom-built scripts, in their technologies of choice, and prevents the rest of the organization from benefiting from their investments. This is the worst possible way to implement DevOps practices and instill lean principles within application delivery, as it nurtures silos and obstructs collaboration and sharing, while often making the delivery toolchain unnecessarily complex.

Coding: an analogy

Let’s make an analogy within software development itself to make this clearer.

Process- or script-driven deployment is in a way analogous to coding in a particular programming language or IDE. If each development team writes code in a different language, there is no opportunity to align teams around the same code base or to share code easily across teams.

Although programming languages have since become much more platform-neutral, recall that in the old days languages were closely bound to specific platforms and machines, with instruction sets tailored to the hardware they ran on. This made sharing or migrating code across platforms difficult. The issues I see with deployment scripts/processes are very similar.

So, in summary, a process is a series of steps that contain information about the environment, the changes being introduced, and the deployment logic needed to implement the change on the affected platform(s).

Process-driven automation

The main features of process-driven automation include:

  • It directly represents the sequence of execution steps to be performed.
  • It can represent both human actions and machine integration steps.
  • A process represents how the environment changes, but has no understanding of the environment’s layout.
  • Processes are tightly coupled to a specific platform, application, and environment.
  • Logical steps contain hard-coded information about settings, context, and environment, which makes them difficult to abstract.
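As a minimal sketch of those last two points (all step names, hosts, and commands are hypothetical), a process definition typically mixes the change logic with hard-coded environment details, while carrying no model of the environment itself:

```python
# Hypothetical sketch of a process definition as a tool might store it:
# each step mixes *what* to do with hard-coded *where* and *how* details,
# which is what makes such processes difficult to abstract or reuse.

process = [
    {"step": "stop service",  "host": "web01.prod", "cmd": "systemctl stop shop"},
    {"step": "copy artifact", "host": "web01.prod", "cmd": "cp /builds/shop-2.4.1.war /opt/app/"},
    {"step": "start service", "host": "web01.prod", "cmd": "systemctl start shop"},
]

# The process knows how the environment changes (the commands), but has
# no model of the environment itself: nothing records that web01.prod
# belongs to "production", runs a particular middleware stack, or sits
# behind a load balancer.
for step in process:
    print(f'{step["step"]} on {step["host"]}')
```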

Today many processes are implemented using software-based flowchart diagrams like the one below:

Flowcharts are well known from computer science textbooks, where they were initially meant to represent control and sequencing logic at a conceptual stage. Their original purpose was not execution or automation, and certainly not the day-to-day automation of the complex delivery logic found in most IT departments of decent-sized organizations.

Sequencing delivery logic in particular drives delivery complexity. Why? Because the sequence depends on many different events that occur during the process, and representing all the options up front can be challenging. As a result, process charts often become unwieldy to read and oversee when the delivery process is complex, which is frequently the case when there are many application interdependencies.
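A rough way to see why representing every option up front is hard: each independent yes/no event that can occur during delivery doubles the number of paths a complete flowchart must show. A small illustrative calculation, with hypothetical event names:

```python
# Illustrative only: with n independent yes/no events during delivery,
# a flowchart that covers every combination needs up to 2**n paths.

events = ["db_migration_needed", "cache_warmup_failed", "canary_unhealthy"]

def paths(n):
    """Upper bound on distinct paths for n independent binary events."""
    return 2 ** n

print(paths(len(events)))  # prints 8: three yes/no events, eight paths
```

Real delivery events are rarely fully independent, so the true number is usually lower, but the chart still grows quickly enough to become hard to read and maintain.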

Simple but risky

To conclude, process- or script-driven automation is often perceived to be simple, especially if the automation tool is closely aligned with the platforms it supports (as with a programming language in the analogy above) and if delivery complexity is low. The approach is well liked by developers because it gives them something very similar to what they experience in their development environments: autonomy and control, above all.

The biggest challenge I repeatedly hear about is tight coupling, which results in many different “clones” of the same script/process being used to deploy to different environments or for different applications. Unless there is a strong governance process around the change and use of such scripts, there is a very real risk that what was put into production is not exactly what was tested in UAT or QA… and I am not sure business users and product owners will appreciate that thought.

In my next blog post, I will take a closer look at model-driven automation, a more recent alternative that handles deployments with more opportunity for reuse and far less coupling.

Get an early start and try Clarive now. Install your 30-day trial here.