In this installment of the series we will review how Clarive drives popular z/OS SCM tools, such as CA Endevor or Serena ChangeMan, as part of your global DevOps pipeline.

Source code versioned in z/OS (Endevor or Changeman)

In this scenario, the source code to be deployed resides in the mainframe.

Selection of objects to deploy

Clarive will allow the development user to attach z/OS packages (Endevor or ChangeMan) to the changesets for further processing.

z/OS packages


z/OS repository (Changeman ZMF)


Preparing Payload Elements

Clarive rules will define the logic to prepare the z/OS packages by executing the online or batch operations needed to prepare them (freeze, inspect, generate, execute, etc.).

Inspect package operation (Endevor)


Deployment of the elements

The deployment of the packages will be executed by Clarive by submitting the specific JCL to ship/promote/approve the packages included in the job changesets.

Ship package (Endevor)


Endevor Ship package sample output



Clarive rules will allow the administrator to define the way to roll back changes. In both Endevor and ChangeMan ZMF, it will execute a Backout operation on each package included in the job.

Backout package (Endevor)



Endevor and ChangeMan are powerful version control tools used in mainframe environments. Fundamentally, they have similar or equivalent workflows for the software configuration lifecycle, but these tools are often used independently of the overall enterprise DevOps pipeline. DevOps implementations often leave them out, yet they remain critical to delivering and running critical areas of the business alongside much newer technologies.

With its mainframe orchestration capabilities, Clarive enables organizations with either tool to build better integrated pipelines that bring mainframe changes into the main Dev to Ops stream.

Stay tuned for the fourth and final installment of this series!

Read the previous post of this series and learn about the mainframe features you can find in Clarive and how they are integrated into the system.

This release contains a lot of minor fixes and improvements over 7.0.12. It also focuses on interface refactoring, improving the kanban boards.

Git repositories navigation on a tab

In Clarive 7.0.13 you will find a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more.

To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

Repository Navigation

Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded in Clarive) can include default data as part of it.

The ClariveSE profile now includes a sample-html project and two releases with several changes in them. It also automates the launch of 3 deployment jobs to INTE, TEST, and PROD.

To get the profile and the default sample data installed, execute cla setup <profile> and answer yes to the question Load default data?. Once you start the Clarive server it will automatically load the profile and the default data.
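
For reference, the whole flow fits in one short session. This is a sketch of the commands described above; the profile name is a placeholder and the exact prompt wording may vary between versions:

```shell
# Install a profile together with its bundled default data
cla setup <profile>

# ... answer the interactive prompt:
# Load default data? yes
```

After that, starting the Clarive server loads the profile and the sample data automatically.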


Kanban Board improvements

Custom card layout

You can now configure the layout of the cards of your Kanban Boards to show the information that you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

Cards Layout

Auto refresh

In the Quick View options panel (click on View button), now you’ll find a switch to toggle the Auto Refresh for this board. It will be updated with changes in the topics shown whenever the board tab is activated.

Auto refresh

Save quick view by user

In Clarive 7.0.13 the options selected in the quick view menu will be saved locally in your browser storage so every time you open the board it will use the last swimlanes, autorefresh, cards per list, etc. configuration you used.

Predefined statuses by list

Whenever you create a new board, it will be created with three default lists, and default statuses are now assigned to these lists using the following rules:

  • New: Initial statuses
  • In Progress: Normal statuses
  • Done: Final and Cancelled statuses

Killtree when job is cancelled

One of the most important improvements of Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when it is cancelled from the interface.
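
Conceptually, cancelling a job must terminate not just the job's own process but every child process it spawned. A minimal shell sketch of the idea (the `sh -c` job and the `pkill`-based tree kill are illustrative, not Clarive's actual implementation):

```shell
# Simulate a job step that spawns child processes
sh -c 'sleep 300 & sleep 300 & wait' &
JOB=$!
sleep 1   # let the children start

# Killing only $JOB would leave the sleeps running as orphans.
# A kill-tree terminates the descendants first, then the job itself.
pkill -TERM -P "$JOB"             # children of the job
kill -TERM "$JOB" 2>/dev/null     # the job process
wait "$JOB" 2>/dev/null || true   # reap the job

if kill -0 "$JOB" 2>/dev/null; then
  echo "job still running"
else
  echo "job tree terminated"
fi
```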

You can read about this new feature in this blog post.

Improvements and issues resolved

  • [ENH] Git repositories navigation on a tab
  • [ENH] Clax libuv adaptation
  • [ENH] NPM registry directory new structure
  • [ENH] Add rulebook documentation to service.artifacts.publish
  • [ENH] Return artifact url on publish
  • [ENH] Invite users to Clarive
  • [ENH] Load default data by profile
  • [ENH] Users can choose shell runner for rulebooks
  • [ENH] Kill job signal configured in yml file
  • [ENH] Add default workers configuration to clarive.yml file
  • [ENH] Boards shared with “ALL” users
  • [ENH] Kanban custom card fields
  • [ENH] Killtree when job is cancelled
  • [ENH] Kanban boards auto refresh
  • [ENH] Make sure to save kanban quick view session
  • [ENH] Filter data according to filter field in Topic Selector fieldlet
  • [ENH] Make sure new created boards have default lists
  • [ENH] Add date fields to card layout configuration
  • [FIX] Check user permissions in service.topic.remove_file
  • [FIX] Make sure user with permissions can access to rule designer
  • [FIX] Make sure CI permissions are working correctly
  • [FIX] Make sure that the ci grid is updated after the ci is modified
  • [FIX] Control exception when running scripts.
  • [FIX] Change project_security structure on user ci
  • [FIX] User without project field permissions can edit the topic
  • [FIX] Make sure React apps work in IE 11
  • [FIX] Show cis in create menu (standard edition)
  • [FIX] Administrator should be able to delete artifacts in ClariveSE
  • [FIX] When publishing NPM packages with scopes tarball is empty
  • [FIX] Make sure default values from variables are used when adding them
  • [FIX] Make sure notifications are sent only to active users
  • [FIX] Make sure to show username in “Blame by time” option for rules versions
  • [FIX] Remove default values when changing type of variable resource
  • [FIX] Allow single mode in variables resources
  • [FIX] Escape “/” in URLs for NPM scoped packages from remote repositories
  • [FIX] Avoid console message when opening a variable resource with cis set as default values
  • [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
  • [FIX] Refresh resources from url
  • [FIX] Create resource from versioned tab
  • [FIX] Make sure remote script element always display a final message
  • [FIX] Save variable when deleted default value field in a variable resource
  • [FIX] Make sure topic’s hidden fields are available as topicfields bounds
  • [FIX] Save resource when it does not have to validate fields
  • [FIX] Make sure projects can be added as kanban swimlanes
  • [FIX] Make sure changeset with artifact revision attached can be opened
  • [FIX] Make sure narrow menu repository navigation show changes related to branch
  • [FIX] Formatting event data when the fail service is used
  • [FIX] Make sure that the chosen element is always selected in the rule tree.
  • [FIX] Reload data resource when refreshing
  • [FIX] Job distribution and last jobs dashlets should filter projects assigned to the user
  • [FIX] Make sure user combo does not have grid mode available in topic
  • [FIX] Make sure that system users are shown in the users combo
  • [FIX] Display column data in edition mode for a Topic Selector fieldlet in a topic
  • [FIX] Filter projects in grids by user security
  • [FIX] Make sure all height sizes are available in topic selector combo
  • [FIX] Ship remote file: show log in several lines
  • [FIX] Skip job dir removal in rollback
  • [FIX] Remove FilesysRepo Resource
  • [FIX] Remove permissions option from user menu
  • [FIX] Make sure screen layout displays correctly after maximizing description and choosing back in the browser
  • [FIX] Remove session when user get deactivated
  • [FIX] Resources concurrency
  • [FIX] Validate CI Multiple option just with type ci variables
  • [FIX] Resource not saved when validation fails
  • [FIX] Make sure that combo search has optimal performance
  • [FIX] Make sure ldap authentication returned messages are available in stash
  • [FIX] Show date and time in fieldlet datetime
  • [FIX] User session should not be removed on REPL open
  • [FIX] User with action.admin.users should be able to edit users
  • [FIX] Make username available in dashboard rules execution
  • [FIX] Make sure collapsing lists saved in user session correctly

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.


Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.

Try Clarive now and start improving your DevOps practices.

Enterprises are in constant search of ways to deliver faster, ideally in a continuous/frequent fashion, with full traceability and control, and of course with excellent business quality.

DevOps as an approach continues to get a lot of attention and support in achieving these goals.

As readers can find in other blogs and articles on DevOps, DevOps aims at bringing Development and Operations closer together, allowing better collaboration between them, and facilitating a smoother handover during the delivery process. Automation remains a critical component in the technical implementation.

What strikes me all the time is that, when I discuss the subject with customers, analysts, and other colleagues in the field, very quickly we seem to end up in a DevOps toolchain discussion that gets cluttered by numerous best-of-breed point products that one way or another need to integrate, or at least work together, to get the delivery job “done”.

Why is that? Why does a majority end up with (too) many tools within the delivery toolchain?

We fail to search for simplicity

If you read what analysts like Gartner or Forrester are writing about implementing DevOps, and if you read closer about what Lean IT stands for, then a common theme that will surface is SIMPLICITY.

If you want to enhance collaboration between delivery stakeholders, if you want to make the handover of deliverables easier, if you want to automate the end-to-end delivery process, then you should look for ways to make your delivery toolchain simpler, not more complex.

As part of the analysis and continual improvement process of the delivery value stream, we look for better ways to do specific tasks. We should in addition carefully look at alternatives to avoid manual activities in processes when possible. This is just applying common Lean practices in the context of application delivery.

Many (bigger) enterprises remain overly siloed, and this often results in suboptimal improvement cycles. When developers face issues with the build process, they look on the web for better build support for their specific platform, ideally in open source, so they can “tweak” it for their needs if required (it is often a matter of retained “control”). If quality suffers, developers and testers can do their own quest to improve quality from their viewpoint, leading to the selection and usage of specific point products by each team, sometimes without even being aware of each other's choices.

I can continue with more examples in the same trend, but the pattern is obvious: when teams continue to look for the best solution “within their silo”, most of the time organizations will end up with an overly complex and tool-rich delivery toolchain.

Look at delivery in a holistic way, from a business perspective

The above approach does not respect some important Lean principles, though: look at the value stream from a customer’s perspective, in a holistic way, creating flow while eliminating waste.

These are some of the things you should look at while analysing and improving your delivery toolchain:

  • How does demand/change flow into the process? How is the selection/acceptance process handled? How is delivery progress tracked?

  • How automated are individual delivery steps (like build, provision, test, deploy)? How is the delivery process/chain itself automated? Are any manual activities happening? Why? Does automation cover ALL platforms, or only a subset?

In case you would like to learn more about this subject, I can recommend reading the following ebook on the Clarive website: “Practical Assessment Guide for DevOps readiness within a hybrid enterprise”.

Clarive CLEAN stack

A C.L.E.A.N way to deliver quality

At Clarive we believe simplicity is vital for sustained DevOps and delivery success.
We designed the C.L.E.A.N stack exactly with this in mind:

Clarive Lean & Effective Automation requiring Nothing else for successful delivery.

Indeed, Clarive allows you to:

  • Implement Lean principles and accurate measurement and reporting with real-time and end-to-end insight

  • Implement effective and pragmatic automation of both delivery processes as well as delivery execution steps such as build, provision, test, and deploy.

  • Do all this from within the same product, so there is no need to use anything else to get the job done! No real need to implement artifact repositories, workflow tools, or anything else: just Clarive will do!

Of course, in case you have already made investments in tooling, Clarive will integrate with it bi-directionally to get you started quickly. After all, DMAIC and other improvement cycles are cyclic and continual, so you can further refine or improve after you get started if you desire more simplicity…

This is an evolution I have seen many of our clients go through: they initially look at and start with Clarive because they have certain automation or orchestration needs. Then they find out they can do with Clarive what they did with Jenkins, and switch. Next they learn about Clarive’s CI repository and decide to eliminate Nexus. As Clarive has a powerful and integrated workflow automation capability, they realise they could also do without Jira and Bitbucket, and so on. Doing so has saved companies both effort and cost.

In case you are interested in Clarive, start with the 30-day trial.

See also some sample screenshots of the tool below.

Clarive tool

Clarive tool_screenshot

Clarive tool_screenshot_deploy package

Visit our documentation to learn more about the features of Clarive.

We are glad to present our new Git-flow-based branching model, available for Clarive 7.

This new Git flow enables you and your team to:

  • Revert features
  • Deploy different releases progressively to environments
  • Maintain simultaneous live releases
  • Isolate feature groups into different release branches.

This presentation explains the problem it solves and how it can improve your day-to-day workflow tracking changes and delivering applications.


Visit our documentation to learn more about the features of Clarive.

We’re pleased to present our new release, Clarive 7.0.12. This release contains a variety of minor fixes and improvements over 7.0.11 and is focused on interface refactoring.

NPM Artifact Repository management

The Clarive team is proud to release this version with an enhanced artifact repository. This new functionality enables NPM package management.

  • It is now possible to browse the NPM repository folders through the artifacts interface, visualize their content and distinguish the new packages that have been included in the repository.

Create artifact tags in order to sort packages out

  • Use Clarive NPM repositories as a proxy to the global NPM store, or just use them as local repositories, so you can control which public packages are available for your developers
npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_name>
  • Use Clarive Groups of repositories to categorize packages and access several local repositories with just one registry
npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_group>
  • Directly publish in Clarive NPM repositories with npm publish command
npm publish ./ --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_name>
  • You can also publish packages through rulebooks
- publish:
    repository: ''   # repository name
    from: ''
    to: ''
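
As an alternative to passing --registry on every command, the registry can be pinned in a project-level .npmrc file (standard npm behavior); the host and repository names below are placeholders:

```shell
# Point npm at a Clarive repository for this project
echo 'registry=https://<clarive_url>/artifacts/repo/<npm_repo_name>' > .npmrc

# Installs now resolve through the Clarive proxy/local repository
npm install angularjs

# Publishing the package in the current directory targets the same repository
npm publish ./
```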

Take a look at our docs website and learn how to configure your artifact repository in Clarive.

NPM repository events exist in Clarive. So, for example, when the npm publish command is executed against a repository, the artifact will be published in Clarive and a notification email sent to your team. For more information, go to our documentation and learn all you can do with events.

Improvements and issues resolved

  • [ENH] – Project menu revamp
  • [ENH] – Plugins code structure and formatting
  • [ENH] – Owner can cancel and restart jobs
  • [ENH] – Interface plugins standardization
  • [FIX] – Docker images cache management
  • [FIX] – Show subtask editable grid only during edition
  • [FIX] – Differentiate environments and variables in menu

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.


Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.

Get an early start and try Clarive now. Install your 30-day trial here.

To conclude this blog series, let me share some criteria to evaluate different automation solutions in the application delivery context. These criteria can help you in the selection process for a good delivery automation solution.

Logic Layering
How is the automation logic laid out?
Are the flow components tightly or loosely coupled?

Runs Backwards
If rolling back changes is needed, is a reverse flow natural or awkward?

Reusable Components
Can components and parts of the logic be easily reused or plug-and-played from one process to the next?

Entry Barrier
How hard is it to translate the real world into the underlying technology?

Easy to Implement
How hard is it to adapt to new applications and processes? What about maintenance?

Environment and Logic Separation
How independent is the logic from the environment?

Model Transition
Can it handle the evolution from one model to the other?

Massive Parallel Execution
Does the paradigm allow for splitting the automated execution into correlated parts that can run in parallel, with results joined later?

Generates Model as a Result
Does the automation know what is being changed and store the resulting configuration back into the database?

Handles Model Transitions
Can the system assist in evolving from one environment configuration to another?

Testable and Provable
Can the automation be validated, measured and tested using a dry-run environment, and be proven correct?

| Criteria | Process-Driven | Model-Driven | Rule-Driven |
|---|---|---|---|
| Logic Layering | Flowchart | Model, Flowchart | Decision Trees |
| Coupling | Tight | Loose | Decoupled |
| Easy to Debug |  |  |  |
| Runs Backwards (Rollback mode) |  |  |  |
| Understands the underlying environment |  |  |  |
| Understands component dependencies |  |  |  |
| Reusable Components |  |  |  |
| Entry Barrier | Medium | High | Low |
| Easy to Migrate |  | ✪✪✪ | ✪✪✪✪ |
| Easy to Maintain |  | ✪✪✪ | ✪✪✪✪ |
| Environment and Logic separation |  |  |  |
| Requires Environment Blueprints |  |  |  |
| Handles Model Transitions |  |  |  |
| Massive Parallel Execution | parallel by branching only | limited by model components |  |
| Performance |  | ✪✪✪ | ✪✪✪✪✪ |

Final notes

When automating complex application delivery processes, large organizations need to choose a system that is both powerful and maintainable. Once complexity is introduced, ARA systems often become cumbersome to maintain, slow to evolve and practically impossible to migrate out.

Process-driven enterprise systems excel at automating business processes (as BPM tools do), but they do not inherently understand the underlying environment. In application delivery, and release automation in general, understanding the environment is key for component reuse and dependency management. Processes are difficult to adapt and break frequently.

Model-driven systems have a higher implementation ramp-up time since they require blueprinting of the environment before starting. Blueprinting the environment also means duplicating container metadata and other configuration management and software-defined infrastructure tooling. The actions executed in model-based systems are not transparent, tend to be fragmented and require outside scripting. Finally, many release automation steps simply cannot be modeled that easily.

Rule-driven systems have a low entry barrier and are simple to maintain and extend. Automation steps are decoupled and consistent, testable and reusable. Rules can run massively in parallel, scaling well to demanding delivery pipelines. The rule-action logic is also the basis of machine-learning and many of the AI practices permeating IT nowadays.

In short, here are the key takeaways when deciding what would be the best approach to automating the delivery of application and service changes:


| Process-Driven | Model-Driven | Rule-Driven |
|---|---|---|
| ✓ Easy to introduce | ✓ Easy to model | ✓ Simple to get started |
| ✓ Hard to change | ✓ Complex to orchestrate | ✓ Highly reusable |
| ✓ Not environment-aware | ✓ High entry barrier | ✓ Decoupled, easy to change and replace |
| ✓ Error prone | ✓ Duplication of blueprints | ✓ Massively scalable |
| ✓ Complex to navigate and grasp | ✓ Leads to fragmented logic and scripting | ✓ Models the environment as a result |
|  | ✓ Not everything can or needs to be modeled | ✓ Fits many use cases |

Rule-driven automation is therefore highly recommended for implementing application and service delivery, environment provisioning and orchestration of tools and processes in continuous delivery pipelines. In fact, a whole new generation of tools in many domains now relies on rule-driven automation, such as:
– Run-book automation
– Auto-remediation
– Incident management
– Data-driven marketing automation
– Cloud orchestration
– Manufacturing automation and IoT orchestration
– And many more…

Release management encompasses a complex set of steps, activities, integrations and conditionals. So which paradigm should drive release management? Processes can become unmanageable and detached from the environment. Models are too tied to the environment and end up requiring scripting to deliver changes in the correct order.

Only rule-driven systems can deliver quick wins that perform to scale and are easy to adapt to fast-changing environments.

Get an early start and try Clarive now. Install your 30-day trial here.

In this post we are going to explain how to make application deployments using Clarive EE, both to the Google Play Store and to the iOS store (Apple Store), thanks to the Clarive plugins.

This process will not require any additional programming knowledge. Thanks to the interface offered by the rules designer we’ll be able to configure these deployments.

For more information about Clarive and the components we use, see our Docs.

Deploying to Google Play Store
General features

In the case of Android applications, the Gradle plugin is used to compile the application, and the Fastlane plugin to send it to the Play Store automatically.


Beforehand, we need to ensure that there is already a first version of the application uploaded so that the deployment can be carried out automatically.

We then create a Generic Server from the Resources → Server panel. On this server we have the application installed together with Gradle and Fastlane.

Once we have configured the server, we need to go to the rules designer and create a new rule with the type Pipeline in the Admin panel.

We then use the PRE phase to compile the application. To do this, we drag the operation Gradle compile:

In the set up box that appears, we should select the server that was previously configured, and fill out the remaining fields in order for it to compile correctly.

We then need to drag the operation Fastlane task to the RUN phase in order to configure the upload of the application to the Play Store:

We then fill in the operation form with all the necessary data ready for the upload.

We are now ready for the deployment to Play Store. Next we need to add the operations to deploy to Apple Store using the same Pipeline.
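
Under the hood, the two operations boil down to commands along these lines (the package name, APK path and track are illustrative, not values taken from the rule form):

```shell
# PRE phase: compile the Android application with Gradle
./gradlew assembleRelease

# RUN phase: upload the signed APK to the Play Store via Fastlane's supply action
fastlane supply \
  --package_name com.example.app \
  --apk app/build/outputs/apk/release/app-release.apk \
  --track production
```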

Deploying to Apple Store
General features

This time we’ll use the Fastlane plugin to compile and send the application to Apple Store automatically. We need to ensure that Xcode is installed together with Fastlane.


We now need to create a Generic Server, just like we did previously. On this server we have the application installed together with Fastlane and Xcode.

We must also configure the access credentials for our Apple Store account in the Resource iOSCredentials, which we access from the Resources → iOS panel.

In the same way as we did previously with Android, we’ll compile our application in the PRE phase, with the Fastlane task operation.

Within the configuration of the operation, we need to select the server, the credentials, and fill in the rest of the necessary fields.

We then configure the delivery of the application to the Apple Store, using the same Fastlane task operation in the RUN phase.

We need to select the ‘Upload App’ option, and fill in the fields.
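
Roughly, the two Fastlane task operations correspond to commands like these (the scheme and file names are illustrative):

```shell
# PRE phase: build and sign the app with gym (Fastlane's wrapper around xcodebuild)
fastlane gym --scheme "ExampleApp"

# RUN phase: upload the build to the store with deliver
fastlane deliver --ipa ExampleApp.ipa --skip_screenshots --skip_metadata
```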

With this Pipeline we have now set up automatic publication both in the Play Store and the Apple Store.
If you have any questions please don’t hesitate to contact us at Clarive Community.

Get an early start and try Clarive now. Install your 30-day trial here.