This video shows how Clarive Enterprise Edition (EE) can deploy WAR files to Tomcat, .NET applications to Windows servers, and mainframe Endevor packages with a single pipeline.


In the early stages, deployment is done for each technology separately (WAR, .NET, mainframe), but for the production deployment, all 3 technologies are deployed to 3 different platforms with a single job.
All deployments are done with a SINGLE pipeline rule.

Watch the video here

This guarantees consistency during deployment.

3 technologies (Java, .NET, COBOL programs in Endevor packages), 3 platforms (Tomcat on Linux, MS Windows Server, and mainframe), 3 environments (DEV, QA, PROD), deployed with jobs generated by a SINGLE pipeline rule.

The pipeline rule can be extended with other technologies (Siebel, SAP, mobile apps), additional environments (User Acceptance, PreProd), and additional platforms (iOS, Android).
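In Clarive the pipeline rule itself is built in the visual rule designer, but the core idea, one rule that dispatches each changeset item to the right platform-specific logic, can be sketched in plain Python (all names below are hypothetical, for illustration only):

```python
# Hypothetical sketch of a single pipeline rule covering all three
# technologies; real Clarive rules are built in its visual rule designer.

def deploy_war_to_tomcat(item, env):
    return f"{item['name']} deployed to Tomcat ({env})"

def deploy_dotnet_to_iis(item, env):
    return f"{item['name']} deployed to Windows ({env})"

def promote_endevor_package(item, env):
    return f"{item['name']} promoted in Endevor ({env})"

def deploy(item, environment):
    """Dispatch one changeset item to the right platform-specific step."""
    handlers = {
        "war": deploy_war_to_tomcat,        # Java WAR -> Tomcat on Linux
        "dotnet": deploy_dotnet_to_iis,     # .NET app -> Windows server
        "endevor": promote_endevor_package, # COBOL -> mainframe Endevor
    }
    return handlers[item["technology"]](item, environment)

def run_job(items, environment):
    """One job deploys every technology in the release to one environment."""
    return [deploy(item, environment) for item in items]
```

Because the dispatch lives in a single rule, adding a technology or environment means adding one handler or one parameter value, not cloning the whole pipeline.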

With this, Clarive EE offers an end-to-end view of the release process, with extended dashboarding capabilities and easy tool navigation.

Get the full insight here


Get an early start and try Clarive now. Get your custom cloud instance for free.



It is incredible to see how much, and still increasing, attention DevOps is getting in organizations today.


As a DevOps tool vendor, we welcome this of course, but at the same time it also confirms that successfully implementing DevOps within organizations is not as simple as it is sometimes made out to be. The obvious question is then, of course: why?

In essence DevOps is about improved collaboration between Dev and Ops and automation of all delivery processes (made as lean as possible) for quality delivery at the speed of business. If you want to learn more about DevOps and how to implement it in bigger enterprises, take a look at the 7 Step Strategy ebook on our website.

End-to-end delivery processes can be grouped into 3 simple words: Code, Track, and Deploy.

  • “Code” represents those delivery tasks and processes closely aligned with the Dev side.
  • “Deploy” represents the delivery tasks and processes closely aligned with the Ops side of the delivery chain.
  • “Track” is what enables improvement and better collaboration: tracking progress within the entire delivery chain. To make delivery processes lean, accurate, real-time, factual process data is required to analyze, learn, and improve.

Thinking about the delivery toolchain

Many colleagues have written about the cultural aspects of DevOps and the related challenges of implementing DevOps within the organization. I concur with their statements and am generally in agreement with their approach for a successful DevOps journey.

To change a culture and/or team behaviour though, especially when team members are spread globally, the organization needs to think carefully about its delivery toolchain. Why? Because a prerequisite for a culture to change, or even more basically simply for people to collaborate, is that people are able to share assets and information in real time, regardless of location.

The reality in many organizations today is that people are grouped into teams and that each team has the freedom to choose its own tooling, based on platform support, past experience, or just preference. As a result, organizations very quickly find themselves in a situation where the delivery toolchain becomes a big set of disconnected tools for performing various code and deploy tasks. The lack of integration results in many manual tasks to “glue” it all together. Inevitably they all struggle with end-to-end tracking, and a lot of time and energy is wasted on this. I often see this even within teams, because of the plethora of tools they have installed as their delivery toolchain.


Clarive Lean Application Delivery

SIMPLICITY is the keyword here.

Funnily enough, this is a DevOps goal! Recall that DevOps is said to apply Lean and Agile principles to the delivery process. So why do teams allow product-based silos? Why do they look for tools to fix only one particular delivery aspect, like build, deploy, version control, test, etc.?

Instead of looking for (often open source) code or tools to automate a particular delivery aspect, a team or organization should look at the end-to-end delivery process, and __look for the simplest way to automate this, and importantly without manual activities!__

We believe that with workflow-driven deployment, teams can get code, track, and deploy automated the right way: simplified and integrated!

Workflow-driven deployment will allow teams to:

  • Use discussion topics that make it simpler to manage and relate project activities: code branches are mapped 1:1 with their corresponding topic (Feature, User Story, Bugfix, etc.) making them true topic branches. This will provide strong coupling between workflow and CODE.
  • Track progress and automate deployment on every environment through kanban boards. Kanban boards allow you to quickly visualize the status of various types of topics in any arrangement. Within Clarive, kanban topics can easily be grouped into lists, so that you can split your project in many ways. Simply drop a kanban card into an environment on a board to trigger a deployment. Simple and fully automated! This will provide strong coupling between workflow and DEPLOY automation onto every environment.

The Clarive Kanban makes tracking progress easier

  • Analyze and monitor status, progress, and timing within the delivery process. It even makes it possible to perform pipeline profiling, which allows you to spot bottlenecks and helps you optimize pipelines and your overall workflow using execution profiling data. All data is factual and real-time! This will provide you with ultimate TRACK information within the delivery process.
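To illustrate the 1:1 mapping between code branches and topics described above, here is a minimal Python sketch; the branch naming convention and function names are assumptions for the example, not a Clarive specification:

```python
import re

# Assumed topic-branch naming convention for this example:
# <topic-type>/<topic-id>-<short-description>, e.g. "feature/1234-login-page".
BRANCH_PATTERN = re.compile(
    r"^(?P<type>feature|story|bugfix)/(?P<id>\d+)-(?P<slug>[\w-]+)$"
)

def topic_for_branch(branch):
    """Resolve the workflow topic a code branch belongs to (1:1 mapping)."""
    match = BRANCH_PATTERN.match(branch)
    if not match:
        raise ValueError(f"not a topic branch: {branch}")
    return {"type": match["type"], "id": int(match["id"])}
```

With a convention like this, every branch resolves to exactly one topic, so workflow status and code activity can always be correlated.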

The Right way to go

Why is workflow-driven deployment the right way to go? Because it breathes the true objectives of DevOps: better and seamless collaboration between Dev and Ops, with automation everywhere possible at the toolchain level, not only at the process/human level. This makes a big difference. I believe that a lot of companies continue to struggle with DevOps simply because they are shifting their collaboration and automation issues from their current processes and tools to a disconnected DevOps toolchain that exposes similar and new problems. As a result, they become skeptical about the DevOps initiative… and blame the toolchain for it! (Always easier than blaming people.)

A DevOps platform that enables true workflow-driven deployment blends process automation with execution automation and has the ability to analyze and track automation performance from start to end. This is what you should look for to enable DevOps faster: a simplified, integrated toolchain that gets the job done with transparency, so organizations can concentrate on the cultural and people-related aspects.


Try Clarive now. Get your custom cloud instance for free.



This post is the first in a series of 4 covering different types of automation methods.


Organizations that want to deliver application, service and environment changes in a quick, consistent, and safe manner and with a high level of quality, invariably need a good and flexible automation system.

Choosing the correct automation method from the start can make the transition to continuous or automated delivery a lot easier.

Current tools for delivering applications in the DevOps and ARA space usually fit into one of these three automation paradigms:

  1. Process or script driven
  2. Model driven
  3. Rule driven

In the past, and in many realms of automation systems, process/script-driven approaches have prevailed. More recently, especially since the inception of the cloud and software-defined infrastructure, model-driven automation has become increasingly popular.

However, as I will explain in this blog series, both process and model-driven automation have serious drawbacks. These drawbacks tend to result in a considerable amount of rigid processes that are expensive in terms of maintenance and evolution, or simply very hard to introduce in the first place. In addition, delivery quality can be seriously impacted.

This blog post is the first of 4 that will elaborate on each paradigm individually, discussing their approach, features, and drawbacks.

Process or script-driven automation

The first paradigm I want to discuss is script or process-driven automation. It is a method based on a clearly defined, start-to-end process flow that defines how change is delivered to destination environments, typically a series of procedural steps executed to deploy the entire application.

But consider this: such an approach can become painful given the trend that requires today’s developers to build scalable applications and apply a strategy based on hybrid clouds and platforms to attain flexibility and continuity at the lowest possible cost. With multiple platforms in mind, a process/script-based solution means you need a unique process for each platform, cloud, application, and/or environment.

Since scripts are unique to each combination of process, environment, and app, they are called tightly coupled. As a result, deployment processes may need to be rewritten several times throughout the lifecycle of the application — any time the application, middleware, or components change.
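A minimal Python sketch of what this tight coupling looks like in practice, alongside the parameterized alternative (hosts, ports, and app names below are made up for the example):

```python
# Tightly coupled: environment details are hard-coded, so every new
# environment or application tends to get its own near-identical clone.
def deploy_shop_to_qa():
    host, port = "qa-web01", 8080
    return f"copy shop.war to {host}:{port}/opt/tomcat/webapps"

def deploy_shop_to_prod():  # a clone of the QA script with values changed
    host, port = "prod-web01", 80
    return f"copy shop.war to {host}:{port}/opt/tomcat/webapps"

# Decoupled alternative: one deployment routine, environment passed as data.
ENVIRONMENTS = {
    "QA":   {"host": "qa-web01", "port": 8080},
    "PROD": {"host": "prod-web01", "port": 80},
}

def deploy(app, env):
    cfg = ENVIRONMENTS[env]
    return f"copy {app} to {cfg['host']}:{cfg['port']}/opt/tomcat/webapps"
```

In the coupled version, a change to the deployment logic must be copied into every clone; in the decoupled version, it is made once and only the environment data varies.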

Another weakness of the process-driven approach is that it does not offer the opportunity to align teams around the same toolset. In fact, it encourages individual teams to maintain their own catalogue of custom-built scripts, in their technologies of choice, and prevents the rest of the organization from benefiting from their investments. This is the worst possible way to implement DevOps practices and instill lean principles within application delivery, as it nurtures silos and obstructs collaboration and sharing, while often making the delivery toolchain unnecessarily complex.

Coding: an analogy

Let’s make an analogy within software development itself to make this clearer.

Process or script driven deployment is in a way analogous to coding using a particular programming language or IDE. Since each development team writes code in a different language, it does not offer the opportunity to align teams around the same code set or make it easy to share code across teams.

Although programming languages have become much more platform-neutral in recent times, recall that in the old days languages were closely bound to platforms and machines, with instruction sets that specifically supported the platform or machine they ran on. This also caused issues when sharing or migrating code across platforms. This is very similar to the issues I see with scripts/processes for deployment.

So, in summary, a process is defined as a series of steps that contain information about the environment, the changes being introduced and the deployment logic to implement the change process for the affected platform(s).

Process-driven automation

The main features of process-driven automation include:

  • Directly represents how a sequence of execution steps needs to be performed.
  • Represents both human actions and machine integration processes.
  • A process represents how the environment changes, but does not understand the environment layout.
  • Processes are tightly coupled.
  • Logical steps contain hard-coded information about settings, context, and environment, which makes them difficult to abstract.

Today many processes are implemented using software-based flowchart diagrams like the one below:

Flowcharts are very well known from computer science textbooks. They were initially meant to represent control/sequencing logic at a conceptual stage. Their original use was never intended for execution or automation, certainly not in the context of day-to-day automation of the complex delivery logic found in most IT departments within decent-sized organizations.

Sequencing delivery logic in particular drives delivery complexity. Why? Because the sequence “depends” on many different events that occur during the process. Representing all options up front can be challenging. As a result, process charts often become unwieldy to read and oversee when the complexity of the delivery process is high, which is often the case when there are a lot of application interdependencies.

Simple but risky

To conclude, process or script-driven automation is often perceived to be simple, especially if the automation tool is closely aligned with the platforms it supports (as with a programming language in the analogy I used) and if delivery complexity is low. The approach is well appreciated by developers because it gives them something very similar to what they experience within their development environments: autonomy and control, above all.

The biggest challenge I repeatedly hear about is its tight coupling, resulting in many different “clones” of the same script/process being used for deployment onto different environments or for different applications. Unless there is a strong governance process supporting the change and use of such processes, the risk can be very high that what was put into production is not entirely the same as what was tested in UAT or QA… I am not sure business users and product owners will appreciate that thought.

In my next blog post, I will take a closer look at model-driven automation, a more recent alternative for handling deployments with more opportunity for reuse and far less coupling.


Read next:

Try Clarive now. Get your custom cloud instance for free.