I deliberately stated the title in the “traditional” way: how to get control OVER teams.


You want to implement DevOps, and you are aware that doing so means adopting Lean and Agile principles along the way to success.

Implementing DevOps indeed implies not only introducing tooling to ensure end-to-end automation, but also a change in culture and in how different people and parts of the organization collaborate.

What does Agile tell us?

Studying the agile principles, you will have discovered that you best build projects around motivated individuals: give them the environment and support they need and trust them to get the job done. Another principle is to make teams self-organizing, because the best architectures, requirements, and designs emerge from self-organizing teams. Finally, you should strive for simplicity, as this is essential for flow, quality, and focus. If you are interested in further detail, I suggest you also take a look at the 12 principles behind the Agile Manifesto.

What does Lean tell us?

When reading about Lean, you probably have encountered the DMAIC improvement cycle. DMAIC, which is an acronym for Define, Measure, Analyze, Improve and Control, is a data-driven improvement cycle used for improving, optimizing and stabilizing (business) processes and designs. It is in fact the core tool used within Six Sigma projects. However, DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications.

Since in this blog I am talking about control, let’s elaborate some more on the “C” in DMAIC.
The “C” focuses on how you sustain the improvements and changes you have made. Teams can introduce changes and improvements, but they must ensure that the process maintains the expected gains. In the DMAIC control phase, the team therefore creates a so-called monitoring plan to keep measuring the success of the updated process.
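To make the control phase a bit more concrete, here is a minimal sketch (my own illustration, not part of any formal DMAIC toolkit) of how a monitoring plan could flag when a delivery metric drifts outside simple control limits; the lead-time numbers and three-sigma limits are hypothetical choices:

```python
from statistics import mean, stdev

def control_limits(baseline):
    """Simple control limits (mean +/- 3 sigma) computed from a baseline sample."""
    m, s = mean(baseline), stdev(baseline)
    return m - 3 * s, m + 3 * s

def out_of_control(samples, lower, upper):
    """Return the samples that fall outside the agreed control limits."""
    return [x for x in samples if not lower <= x <= upper]

# Hypothetical deployment lead times (hours) observed right after the improvement.
baseline = [20, 22, 19, 21, 23, 20, 22]
lower, upper = control_limits(baseline)

# A later sample: the 38-hour outlier would signal that the gains are eroding.
this_week = [21, 24, 38, 20]
print(out_of_control(this_week, lower, upper))  # -> [38]
```

Whatever the exact metric or limits, the point of the control phase is that such a check keeps running after the improvement project ends.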

What to remember?

Transposing the above into the context of DevOps and application delivery, the three things that, to me, give better control are:

  • Build cross-functional teams of motivated individuals, people willing to go for customer value.
  • Give those teams the environment and the support they need to get the job done. To maximize flow while remaining customer focused and striving for quality, ensure the environment is as simple as possible.
  • Make sure it is easy for teams to monitor and measure delivery performance towards “success”.

Interestingly enough, you will not find any direct guidance in the above on how to gain control OVER teams or people, because that goes against fundamental Lean and Agile principles. As a manager, you can “control” more or less by helping shape or define the WHAT: the objective, the result, the definition of success. It is up to the team to define and decide on the “HOW”, the way to get to success or the objective. The reason I write “more or less” is that success is mainly defined by the customer, the consumer of what is being delivered, not necessarily the internal manager.

Now let’s drill a little deeper into the support a team needs to get (self-)control. We mentioned a simple environment and a way to monitor and measure.

In the context of application delivery, this translates into the application delivery end-to-end toolchain environment and the way delivery activities can be monitored and measured within this environment.

Very often when speaking to customers, I hear about toolchain environments similar to the one in the picture below:

(Image: Code, Deploy and Track toolchain)

I typically see different tools used for different aspects of the delivery process, often barely linked to one another, if at all. Many times I even see multiple tools deployed within the same phase of delivery (as shown above).

Why is that? I have witnessed multiple reasons why companies have seen their delivery environment grow over the years, the most common ones being:

  • Different platforms requiring different tooling for coding and/or deploying
  • Through acquisition, different environments have been inherited, and as a result multiple tools became part of the merged organization. To avoid too much change, environments have been left untouched.
  • Companies have given their delivery teams autonomy and flexibility without proper guidance. At first sight, giving power to teams is aligned with the proposed principles, but if this is done without an overall architecture, or in silos, it can lead to suboptimal conditions.

The biggest issue for organizations providing a delivery environment similar to the one in the picture above is that tracking (the end-to-end monitoring and measuring of the delivery process) becomes a real nightmare.

According to Lean principles, one should monitor and measure the delivery value stream. This value stream is customer-centric, so it crosses delivery-phase and tooling boundaries. If measurement data is spread over 30+ tools, then monitoring performance and obtaining insight in real time becomes a real challenge.

How to become successful?

Clarive has been designed and developed with the above in mind. The Clarive DevOps platform aims for simplicity and ultimate operational insight.

Real-time monitoring and measurement is achieved by strong automation and integration. Automating the delivery process implies automating the related process flows (such as demand, coding, testing, defect, and support flows) combined with automating delivery-related activities (such as build, test, provision, and deploy). Clarive is the only tool that allows you to automate both within the same tool. This is what gives you simplicity: no need for multiple tools to get the job done. As a result, all measurement data is captured within a single tool, which gives you real-time, end-to-end data across the delivery value chain. This is exactly what teams need to control their process and what organizations need to control and understand the overall process.

But the reality is that significant investments may already have been made in certain delivery areas (tool configurations or script/workflow automations), something the business will not easily allow to be thrown overboard, as this would be seen as “waste”. Clarive addresses this with its strong bi-directional integration capabilities, allowing organizations to reuse existing investments and treat simplification as part of the improvement cycle.

As a result, Clarive enables teams and companies to gain insight into and control over their end-to-end delivery processes in a very limited time.

Below are some sample screenshots of how Clarive provides powerful Kanban as well as real-time monitoring insight and control.

(Screenshot: Kanban board with an assigned swimlane)


Get an early start and try Clarive now. Get your custom cloud instance for free.



In this video we will deploy a mainframe COBOL application with Clarive on the z/OS mainframe.


This will be done in a continuous way: a developer pushing a revision to a Git repository triggers an event in Clarive, and that event in turn invokes an event rule.

That rule searches the Clarive database for a changeset with status “In Dev”. If one is found, the newly pushed revision is related to it.

If none is found, a new changeset is created for the application with the revision related to it. The source changes for that revision can be viewed in the Clarive UI.
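The logic of that event rule boils down to a find-or-create lookup. Below is a minimal illustrative sketch of that idea in Python; the Changeset class and the in-memory list are hypothetical stand-ins and do not represent Clarive’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Changeset:
    application: str
    status: str = "In Dev"
    revisions: list = field(default_factory=list)

# Tiny in-memory stand-in for the changeset database.
changesets = []

def on_push(application: str, revision: str) -> Changeset:
    """Relate a pushed revision to an open 'In Dev' changeset, creating one if needed."""
    for cs in changesets:
        if cs.application == application and cs.status == "In Dev":
            cs.revisions.append(revision)
            return cs
    cs = Changeset(application)          # no open changeset: create one
    cs.revisions.append(revision)
    changesets.append(cs)
    return cs

print(on_push("payments-cobol", "a1b2c3"))  # creates a new changeset
print(on_push("payments-cobol", "d4e5f6"))  # relates the revision to the same changeset
```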

A job will also start to compile and link the COBOL sources on the mainframe.

Clarive traps the JES spool output and makes it available in its UI. The job is generated by a versioned pipeline rule. In all of the videos, for all of the technologies (WAR, CA-Endevor package, .NET, mobile apps), the same rule is invoked for all of the environments (DEV, QA, PreProd, PROD).

For any of the environments the application needs to be deployed to, the same pipeline rule will be used, ensuring a very consistent way of deploying.

In a next video, the changeset with the mainframe COBOL revision will be related to a release and, together with changesets for other technologies (CA-Endevor package, WAR, .NET, mobile, …), deployed to the production environment, again using the same pipeline rule to generate the job.

This means that a single job will deploy multiple technologies to multiple platforms.


Get an early start and try Clarive now. Get your custom cloud instance for free.



To conclude this blog series, let me share some criteria for evaluating different automation solutions in the application delivery context. These criteria can help you select a good delivery automation solution.


  • Logic Layering: How is the automation logic laid out?
  • Coupling: Are the flow components tightly or loosely coupled?
  • Runs Backwards: If changes need to be rolled back, is a reverse flow natural or awkward?
  • Reusable Components: Can components and parts of the logic be easily reused or plugged from one process into the next?
  • Entry Barrier: How hard is it to translate the real world into the underlying technology?
  • Easy to Implement: How hard is it to adapt to new applications and processes? What about maintenance?
  • Environment and Logic Separation: How independent is the logic from the environment?
  • Model Transition: Can it handle the evolution from one model to another?
  • Massive Parallel Execution: Does the paradigm allow splitting the automated execution into correlated parts that run in parallel, with the results joined later? (See the sketch after this list.)
  • Generates Model as a Result: Does the automation know what is being changed and store the resulting configuration back into the database?
  • Handles Model Transitions: Can the system assist in evolving from one environment configuration to another?
  • Testable and Provable: Can the automation be validated, measured and tested in a dry-run environment and be proven correct?
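As a quick aside on the massive parallel execution criterion, the sketch below illustrates the fork/join idea in plain Python: independent parts run in parallel and the results are joined before the pipeline continues. The deploy function and component names are placeholders, not tied to any specific product:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(component: str) -> str:
    """Stand-in for deploying one independent component."""
    return f"{component}: deployed"

components = ["war-frontend", "dotnet-service", "cobol-batch"]

# Fork: run the independent deployments in parallel threads.
with ThreadPoolExecutor() as pool:
    results = list(pool.map(deploy, components))

# Join: continue the pipeline only once all results are in.
print(results)
```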

Criteria | Process-Driven | Model-Driven | Rule-Driven
Logic Layering | Flowchart | Model, Flowchart | Decision Trees
Coupling | Tight | Loose | Decoupled
Easy to Debug | | |
Runs Backwards (Rollback mode) | | |
Understands the underlying environment | | |
Understands component dependencies | | |
Reusable Components | | |
Entry Barrier | Medium | High | Low
Easy to Migrate | | ✪✪✪ | ✪✪✪✪
Easy to Maintain | | ✪✪✪ | ✪✪✪✪
Environment and Logic Separation | | |
Requires Environment Blueprints | | |
Handles Model Transitions | | |
Massive Parallel Execution | Parallel by branching only | Limited by model components |
Performance | | ✪✪✪ | ✪✪✪✪✪

Final notes

When automating complex application delivery processes, large organizations need to choose a system that is both powerful and maintainable. Once complexity is introduced, ARA systems often become cumbersome to maintain, slow to evolve and practically impossible to migrate out.

Process-driven enterprise systems excel at automating business processes (as in BPM tools), but they do not inherently understand the underlying environment. In application delivery and release automation in general, understanding the environment is key for component reuse and dependency management. Processes are difficult to adapt and break frequently.

Model-driven systems have a higher implementation ramp-up time since they require blueprinting of the environment before starting. Blueprinting the environment also means duplicating metadata that already lives in containers, configuration management and software-defined infrastructure tools. The actions executed in model-based systems are not transparent, tend to be fragmented and require outside scripting. Finally, many release automation steps simply cannot be modeled that easily.

Rule-driven systems have a low entry barrier and are simple to maintain and extend. Automation steps are decoupled and consistent, testable and reusable. Rules can run massively in parallel, scaling well to demanding delivery pipelines. The rule-action logic is also the basis of machine learning and many of the AI practices permeating IT nowadays.
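To make the rule-driven idea more tangible, here is a minimal, hypothetical sketch of the paradigm: each rule is a decoupled condition/action pair evaluated against incoming events, so rules can be added, removed and reused independently. This illustrates the concept only; it is not Clarive’s rule format:

```python
rules = []

def rule(condition):
    """Register a decoupled condition/action pair as a rule."""
    def register(action):
        rules.append((condition, action))
        return action
    return register

@rule(lambda e: e["type"] == "push" and e["branch"].startswith("feature/"))
def build_and_test(event):
    return f"build and test {event['branch']}"

@rule(lambda e: e["type"] == "deploy_failed")
def auto_remediate(event):
    return f"roll back {event['environment']}"

def handle(event):
    """Fire every rule whose condition matches the incoming event."""
    return [action(event) for condition, action in rules if condition(event)]

print(handle({"type": "push", "branch": "feature/login"}))
print(handle({"type": "deploy_failed", "environment": "QA"}))
```

Because each rule only knows about its own condition and action, adding a new automation (an auto-remediation rule, for example) does not require touching the existing ones.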

In short, here are the key takeaways when deciding what would be the best approach to automating the delivery of application and service changes:

PROCESS
  • Easy to introduce
  • Hard to change
  • Not environment-aware
  • Error prone
  • Complex to navigate and grasp

MODEL
  • Easy to model
  • Complex to orchestrate
  • High entry barrier
  • Duplication of blueprints
  • Leads to fragmented logic and scripting
  • Not everything can or needs to be modeled

RULE
  • Simple to get started
  • Highly reusable
  • Decoupled, easy to change and replace
  • Massively scalable
  • Models the environment as a result
  • Fits many use cases

Rule-driven automation is therefore highly recommended for implementing application and service delivery, environment provisioning and orchestration of tools and processes in continuous delivery pipelines. In fact, a whole new generation of tools in many domains now relies on rule-driven automation, such as:
– Run-book automation
– Auto-remediation
– Incident management
– Data-driven marketing automation
– Cloud orchestration
– Manufacturing automation and IoT orchestration
– And many more…

Release management encompasses a complex set of steps, activities, integrations and conditionals. So which paradigm should drive release management? Processes can become potentially unmanageable and detached from the environment. Models are too tied to the environment and end up requiring scripting to be able to deliver changes in the correct order.

Only rule-driven systems can deliver quick wins that perform to scale and are easy to adapt to fast-changing environments.


Get an early start and try Clarive now. Get your custom cloud instance for free.



It is remarkable how much ITIL bashing I have heard and read about since its 2011 revision was released a few years ago. 


With the transformation into the digital world and practices such as DevOps, Continuous Delivery, and value stream mapping, many question whether ITIL is still relevant today.

Of course it is! Let me try to explain this in some detail and share my top 3 reasons why ITIL will remain relevant in 2018 (and likely beyond).

The reality in the digital age is the ever-increasing customer expectation that digital and mobile services do what customers need, but also that they will always be there, wherever and whenever they are needed. This impacts Dev as well as Ops.

As a result, companies are searching for and creating new innovative services for consumers, industry and government. At the same time, organizations are continuously working on improving the structure and process for making sure that incidents, problems, service requests, and service changes are handled in the most efficient and effective way possible, so that user experience and expectations are met continuously and fast. In the digital world, the expectation is to be up 24/7.

Let’s explore this a step deeper.

IT is required to, and desires to, deliver value to its internal or external customers (and wants to do this as fast as is acceptable to them). Since ITIL v3, the value of an IT service has been defined as a combination of utility and warranty as the service progresses through its lifecycle.

Utility on the one hand is defined as the functionality offered by a product, application, or service to meet a particular need. Utility is often summarized as “what it does” or “its level of being fit for purpose”.

Warranty, on the other hand, provides a promise or guarantee that a product, application or service will meet its agreed requirements (“how it is done”, “its level of being fit for use”). In digital-age wording: ensuring digital and mobile services will always be there, wherever and whenever they are needed.

I read another interesting article a while ago that stated that Dev only produces 20% of the value that a service creates for its internal or external customers. That 20% is the actual functionality, or what the application does. This is the utility of the service, application, or product as explained above. The other 80% of the value of the service is created by Ops, which ensures the service is usable according to the customer’s needs and will continue to be usable throughout its entire lifecycle. This is what ITIL calls the warranty of the service.

Warranty includes availability, capacity, continuity and security of the service that must be implemented and maintained long after the deployment is finished and Dev moves on to their next project, or sprint.

So in the end, Ops has accountability for close to 80% of the actual value of the service for internal or external customers. That’s a lot!

DevOps is a cultural and professional movement focused on better communication, collaboration, and trust between Dev and Ops, to balance responsiveness to dynamic business requirements with stability. In this setting it seems only natural that it is Dev that must earn the trust of Ops. If accountability is spread 80%-20%, then it makes sense to me that the party taking the highest risk seeks the most trustworthy partner. Ops will seek stability and predictability to deliver the required warranty. To establish trust between Dev and Ops, the handover between the two needs to be “trustworthy”. The way to establish this includes:

  • more transparency and accuracy in release and coding progress
  • more automation within the delivery process (the more manual activities in the delivery process, the lower the level of trust will be)
  • mutual understanding and respect of each other’s needs and expectations to be successful

Therefore, Lean IT and value stream mapping, as well as practices like Continuous Delivery and Continuous Deployment, all become building blocks within a DevOps initiative. DevOps is often an organic approach to automating processes and workflows and getting products to market more efficiently and with quality.

In bigger enterprises, applications and services tend to be highly interconnected. There is a desire for better decoupling and the use of microservices, but for many it will take another decade or even longer to ultimately get there (if at all). Dev teams often focus on individual applications or services, but in reality these applications interact with others within the production environment. Ops has the accountability to ensure services and applications remain available and functional, with quality, at all times.

This often means finding a workaround quickly at the front line so customers can continue working, assessing the overall impact of a change in production holistically, identifying the root cause of failures, and so on. This all aligns nicely with what ITIL was designed for: best practices for managing, supporting, and delivering IT services. There is no way the need for such practices will fade or become irrelevant in the near future, especially not in larger enterprises. On the contrary, with the introduction of new platforms (public or private cloud, containers, IoT, virtual machines, etc.) we will see an increasing number of silos and teams, because Dev teams often center around specific platforms.

Their deliverables form the microservices and applications of tomorrow, spread over multiple platforms. Ops needs to ensure these services are of high quality and deliver value to all customers. This requires discipline, communication, collaboration, tracking, planning and learning across all silos and teams. ITIL still remains the best reference point for establishing such practices.

Big companies, often with a coding legacy, will only remain successful in the digital age if they find a blend of Agile, DevOps, ITIL and Lean IT that fits them. I mention only these explicitly because they enjoy great momentum at present, but in fact companies should explore the best practices available, find the blend that works effectively and efficiently for them, and ensure buy-in from those affected.

This last aspect is key: teams need to build a common understanding of how DevOps is enabled by Agile, ITIL/ITSM, Lean and maybe other best practices. It is not just about a tool, automation or continuous delivery; how we go about doing this is what is key. You need to promote, inspire and educate teams on how these practices can be used together to set them and the company up for success.

To finish let me share my 3 reasons why ITIL remains valid into 2018:

1) ITIL continues to provide a stable foundation and reference point in the evolving enterprise

Flexibility, elasticity and scalability remain key attributes of contemporary IT departments. Creating and maintaining this level of agility relies on having clear processes, a clear and accurate understanding of the current IT configuration and of course a good service design. The core principles of ITIL have been refined to help organizations establish these attributes within their technology systems, ensuring that there is a steady foundation for IT operations. Having this stable environment makes it easier to adjust the service management setup without running into any problems.

2) ITIL provides the required stability and value warranty within evolving enterprises

Businesses face more pressure than ever to maintain constant uptime around the clock, and all the innovations in the world are useless if businesses are losing productivity because of system availability issues. ITIL continues to provide the reliability and stability needed to maximize the value of new technology strategies in today’s digital world. While organizations are on their digital transformation journey, they will have to support multi-speed, multi-risk, multi-platform environments and architectures. ITIL, itself regularly evolving and being updated, continues to provide proven, common-sense best practices to deliver stability in evolving, heterogeneous environments.

3) ITIL remains the de-facto reference set of best practices for IT service management (ITSM) that focuses on aligning IT services with the needs of customers

If you pick and choose, adopt and adapt what you find in ITIL, you will learn that a lot of the content is “common sense”. Common sense will never go out of fashion.

Just be aware and accept that the needs of and value to a customer go beyond just the delivery of (isolated) functionality into a production environment.


Get an early start and try Clarive now. Get your custom cloud instance for free.



This video shows how Clarive can deploy WAR files to Tomcat, .NET applications to Windows servers and mainframe Endevor packages with a single pipeline.


In the early stages, deployment is done for each technology separately (WAR, .NET, mainframe), but for the production deployment all 3 technologies are deployed to 3 different platforms with a single job.
All deployments are done with a SINGLE pipeline rule.

Watch the video here:

This way consistency during deployment is guaranteed.

3 technologies (Java, .NET, and COBOL programs in Endevor packages), 3 platforms (Tomcat on Linux, MS Windows Server and mainframe), and 3 environments (DEV, QA, PROD), all deployed with jobs generated by a SINGLE pipeline rule.

The pipeline rule can be extended with other technologies (Siebel, SAP, mobile apps), additional environments (User Acceptance, PreProd) and additional platforms (iOS, Android).
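To illustrate how a single pipeline rule can stay consistent across technologies and environments, here is a minimal, hypothetical sketch (not Clarive’s rule syntax): the deploy step is selected per technology, while the environment is just a parameter, so DEV, QA and PROD all run the same logic:

```python
# Illustrative deploy steps per technology; real steps would call the platform tooling
# (Tomcat manager, MSDeploy, JES job submission, ...).
DEPLOYERS = {
    "war":     lambda item, env: f"copy {item} to Tomcat on Linux ({env})",
    "dotnet":  lambda item, env: f"publish {item} to the Windows server ({env})",
    "endevor": lambda item, env: f"promote package {item} on the mainframe ({env})",
}

def pipeline(changesets, environment):
    """One pipeline rule: the same logic for DEV, QA and PROD, for any mix of technologies."""
    return [DEPLOYERS[cs["technology"]](cs["item"], environment) for cs in changesets]

# Early stages deploy one technology at a time...
print(pipeline([{"technology": "war", "item": "shop.war"}], "DEV"))

# ...while the production job deploys all three technologies in a single run.
release = [
    {"technology": "war", "item": "shop.war"},
    {"technology": "dotnet", "item": "Billing.Web"},
    {"technology": "endevor", "item": "PKG000123"},
]
print(pipeline(release, "PROD"))
```

Adding a technology or environment then only means adding an entry or passing a different parameter, which is what keeps deployments consistent.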

With this, Clarive offers an end-to-end view of the release process, with extended dashboarding capabilities and easy tool navigation.

Get the full insight here


Get an early start and try Clarive now. Get your custom cloud instance for free.



It is incredible to see how much (and ever-increasing) attention DevOps is getting in organizations today.


As a DevOps tool vendor, we of course welcome this, but at the same time it also confirms that successfully implementing DevOps within organizations is not as simple as we are sometimes led to believe. The obvious question then is, of course: why?

In essence DevOps is about improved collaboration between Dev and Ops and automation of all delivery processes (made as lean as possible) for quality delivery at the speed of business. If you want to learn more about DevOps and how to implement it in bigger enterprises, take a look at the 7 Step Strategy ebook on our website.

End-to-end delivery processes can be grouped into 3 simple words: Code, Track, and Deploy.

  • “Code” represents those delivery tasks and processes closely aligned with the Dev side of the delivery chain.
  • “Deploy” represents the delivery tasks and processes closely aligned with the Ops side of the delivery chain.
  • “Track” is what enables improvement and better collaboration: tracking progress across the entire delivery chain. To make delivery processes lean, accurate, real-time and factual process data is required to analyze, learn and improve.

Thinking about the delivery toolchain

Many colleagues have written about the cultural aspects of DevOps and the related challenges of implementing DevOps within the organization. I concur with their statements and am generally in agreement with their approach for a successful DevOps journey.

To change a culture and/or team behaviour though, especially when team members are spread globally, the organization needs to think carefully about its delivery toolchain. Why? Because a prerequisite for a culture to change, or, even more basic, simply for people to collaborate, is that people are able to share assets and information in real time, regardless of location.

The reality in many organizations today is that people are grouped into teams and that each team has the freedom to choose its own tooling, based on platform support, past experience, or just preference. As a result, organizations very quickly find themselves in a situation where the delivery toolchain becomes a big set of disconnected tools for performing various code and deploy tasks. The lack of integration results in many manual tasks to “glue” it all together. Inevitably they all struggle with end-to-end tracking, and a lot of time and energy is wasted on this. I often see this even within teams, because of the plethora of tools they have installed as their delivery toolchain.

(Image: Clarive Lean Application Delivery)

SIMPLICITY is the keyword here.

Funnily enough, this is a DevOps goal! Recall that DevOps is said to apply Lean and Agile principles to the delivery process. So why do teams allow product-based silos? Why do they look for tools to fix one particular delivery aspect only, like build, deploy, version control, test, etc.?

Instead of looking for (often open source) code or tools to automate a particular delivery aspect, a team or organization should look at the end-to-end delivery process and find the simplest way to automate it, and, importantly, without manual activities!

We believe that with workflow-driven deployment, teams can get code, track, and deploy automated the right way: simplified and integrated!

Workflow-driven deployment will allow teams to:

  • Use discussion topics that make it simpler to manage and relate project activities: code branches are mapped 1:1 with their corresponding topic (Feature, User Story, Bugfix, etc.), making them true topic branches. This provides strong coupling between workflow and CODE.
  • Track progress and automate deployment to every environment through kanban boards. Kanban boards allow you to quickly visualize the status of various types of topics in any arrangement. Within Clarive, kanban topics can easily be grouped into lists, so that you can split your project in many ways. Simply drop a kanban card into an environment column on a board to trigger a deployment. Simple and fully automated! This provides strong coupling between workflow and DEPLOY automation onto every environment (see the sketch after this list).
(Screenshot: The Clarive kanban makes tracking progress easier)

  • Analyze and monitor status, progress and timing within the delivery process. It even makes it possible to perform pipeline profiling. Profiling allows you to spot bottlenecks and helps you optimize pipelines and the overall workflow using execution profiling data. All data is factual and real-time! This provides you with ultimate TRACK information within the delivery process.
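As referenced above, here is a minimal, hypothetical sketch of the two couplings described in this list: topics map 1:1 to code branches, and dropping a card onto an environment column triggers a deployment. The names and environment columns are illustrative only, not Clarive’s API:

```python
ENVIRONMENT_COLUMNS = {"DEV", "QA", "PROD"}

def topic_branch(topic_type: str, topic_id: str) -> str:
    """Topics (Feature, User Story, Bugfix, ...) map 1:1 to code branches."""
    return f"{topic_type.lower().replace(' ', '-')}/{topic_id}"

def on_card_dropped(topic_type: str, topic_id: str, column: str) -> str:
    """Dropping a kanban card onto an environment column triggers a deployment."""
    branch = topic_branch(topic_type, topic_id)
    if column in ENVIRONMENT_COLUMNS:
        return f"deploy {branch} to {column}"
    return f"move {topic_id} to '{column}'"  # an ordinary workflow transition, no deploy

print(on_card_dropped("Feature", "PRJ-101", "QA"))    # -> deploy feature/PRJ-101 to QA
print(on_card_dropped("Feature", "PRJ-101", "Done"))  # -> move PRJ-101 to 'Done'
```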

The Right way to go

Why is workflow-driven deployment the right way to go? Because it breathes the true objectives of DevOps: better, seamless collaboration between Dev and Ops, with automation everywhere possible at the toolchain level, not only at the process/human level. This makes a big difference. I believe that a lot of companies continue to struggle with DevOps simply because they are shifting their collaboration and automation issues from their current processes and tools to a disconnected DevOps toolchain that exposes similar and new problems. As a result, they become skeptical about the DevOps initiative... and blame the toolchain for it! (Always easier than blaming people.)

A DevOps platform that enables true workflow-driven deployment blends process automation with execution automation and has the ability to analyze and track automation performance from start to end. This is what you should look for to enable DevOps faster: a simplified, integrated toolchain that gets the job done with transparency, so organizations can concentrate on the cultural and people-related aspects.


Try Clarive now. Get your custom cloud instance for free.



This post is the first in a series of 4 covering different types of automation methods.


Organizations that want to deliver application, service and environment changes in a quick, consistent, and safe manner and with a high level of quality, invariably need a good and flexible automation system.

Choosing the correct automation method from the start can make the transition to continuous or automated delivery a lot easier.

Current tools for delivering applications in the DevOps and ARA space usually fit into one of these three automation paradigms:

  1. Process or script driven
  2. Model driven
  3. Rule driven

In the past, and in many realms of automation systems, process/script-driven approaches have prevailed. More recently, especially since the inception of the cloud and software-defined infrastructure, model-driven automation has become increasingly popular.

However, as I will explain in this blog series, both process and model-driven automation have serious drawbacks. These drawbacks tend to result in a considerable amount of rigid processes that are expensive in terms of maintenance and evolution, or simply very hard to introduce in the first place. In addition, delivery quality can be seriously impacted.

This blog post is the first of 4 that will elaborate on each paradigm individually, discussing their approach, features, and drawbacks.

Process or script-driven automation

The first paradigm I want to discuss is script or process-driven automation. It is a method based on a clearly defined, start-to-end process flow that defines how change is delivered to destination environments, typically a series of procedural steps executed to deploy the entire application.

But consider this: such an approach can become painful, given that today’s developers are expected to build scalable applications and apply a strategy based on hybrid clouds and platforms to attain flexibility and continuity at the lowest possible cost. With multiple platforms in mind, a process/script-based solution means you need a unique process for each platform, cloud, application, and/or environment.

Since scripts are unique to each combination of process, environment, and app, they are called tightly coupled. As a result, deployment processes may need to be rewritten several times throughout the lifecycle of the application — any time the application, middleware, or components change.
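As a deliberately simplified, hypothetical example, a tightly coupled deployment script often looks something like this: the artifact path, host and middleware layout are hard-coded, so every new environment, application or middleware change means cloning and editing yet another copy:

```python
# Hypothetical, deliberately tightly coupled deploy script: ONE app, ONE environment.
import subprocess

APP_WAR = "/builds/shop/shop-1.4.2.war"      # hard-coded artifact
QA_HOST = "qa-tomcat-01.internal"            # hard-coded environment
TOMCAT_WEBAPPS = "/opt/tomcat9/webapps"      # hard-coded middleware layout

def deploy_to_qa():
    """Copy the WAR to the QA Tomcat host and restart the service."""
    subprocess.run(["scp", APP_WAR, f"deploy@{QA_HOST}:{TOMCAT_WEBAPPS}/shop.war"], check=True)
    subprocess.run(["ssh", f"deploy@{QA_HOST}", "sudo systemctl restart tomcat9"], check=True)

if __name__ == "__main__":
    deploy_to_qa()
```

A second application, another environment or a Tomcat upgrade each typically means another copy of this file with different hard-coded values, which is exactly the cloning problem described later in this post.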

Another weakness of the process-driven approach is that it does not offer the opportunity to align teams around the same toolset. In fact, it encourages individual teams to maintain their own catalogue of custom-built scripts, in their technologies of choice, and prevents the rest of the organization from benefiting from their investments. This is the worst possible way to implement DevOps practices and instill lean principles within application delivery, as it nurtures silos and obstructs collaboration and sharing, while often making the delivery toolchain unnecessarily complex.

Coding: an analogy

Let’s make an analogy within software development itself to make this clearer.

Process or script driven deployment is in a way analogous to coding using a particular programming language or IDE. Since each development team writes code in a different language, it does not offer the opportunity to align teams around the same code set or make it easy to share code across teams.

Although in recent times programming languages have become much more platform-neutral, recall that in the old days languages were tightly bound to platforms and machines, and included specific instruction sets closely tied to the platform or machine they supported. This also resulted in issues with sharing or migrating code across platforms. This is very similar to the issues I see with scripts/processes for deployment.

So, in summary, a process is defined as a series of steps that contain information about the environment, the changes being introduced and the deployment logic to implement the change process for the affected platform(s).

Process-driven automation

The main features of process-driven automation include:

  • Directly represents how a sequence of execution steps needs to be performed.
  • Represents both human actions and machine integration processes.
  • A process represents how the environment changes, but does not understand the environment layout.
  • Processes are tightly coupled.
  • Logical steps contain hard-coded information about settings, context, and environment, which makes them difficult to abstract.

Today, many processes are implemented using software-based flowchart diagrams.

Flowcharts are very well known from computer science textbooks. They were initially meant to represent control and sequencing logic at a conceptual stage. Their original use was not intended for execution or automation, certainly not in the context of day-to-day automation of the complex delivery logic that can be found in most IT departments within decent-sized organizations.

Sequencing delivery logic in particular drives delivery complexity. Why? Because the sequence “depends” on many different events that occur during the process. Representing all options up front can be challenging. As a result, process charts often become unwieldy to read and oversee when the complexity of the delivery process is high, which is often the case when there are many application interdependencies.

Simple but risky

To conclude, process- or script-driven automation is often perceived to be simple, especially if the automation tool is very closely aligned with the platforms it supports (as with a programming language in the analogy I used) and if delivery complexity is low. The approach is much appreciated by developers because it gives them a solution and powers very similar to what they experience within their development environments, autonomy and control being the most important ones.

The biggest challenge I repeatedly hear about is tight coupling, resulting in many different “clones” of the same script/process being used for deployment onto different environments or for different applications. Unless there is a strong governance process supporting the change and use of such processes, the risk can be very high that what was put into production is not entirely the same as what was tested in UAT or QA... I am not sure business users and product owners will appreciate that thought.

In my next blog post, I will take a closer look at model-driven automation, a more recent alternative for handling deployments with more opportunity for reuse and far less coupling.



Try Clarive now. Get your custom cloud instance for free.



Managing and deploying changes to your Salesforce platform with Clarive.


First, we create our changeset in Clarive to associate our Git commit with it and follow it through our DevOps cycle.

The job is generated by a versioned pipeline rule. In all of the videos, for all of the technologies (mainframe COBOL, CA-Endevor package, .NET, mobile apps), the same rule is invoked for all of the environments (DEV, QA, PreProd, PROD).

For any of the environments the application needs to be deployed to, the same pipeline rule will be used, ensuring a very consistent way of deploying.


Get an early start and try Clarive now. Get your custom cloud instance for free.