In this last installment of the series, we will review how Clarive can replace z/OS SCM tools such as CA Endevor or Serena ChangeMan with a global DevOps pipeline that drives unified deployments across all platforms.

Source code versioned and deployed by Clarive

Clarive can deploy source code managed outside the mainframe.

Selecting elements to deploy

In this setup, z/OS artifacts (programs, copybooks, JCLs, SQLs, etc.) are versioned in Clarive’s Git repository, although any other VCS would work as well. The developer selects the versions of the elements to deploy from the repository view and attaches them to the Clarive changeset.

Versions associated to changesets


Preparing mainframe elements

Clarive checks out the selected version of the source code in the PRE step of the deployment job and performs the activities needed to check code quality (e.g. executing static code analysis, checking for vulnerabilities) and to identify the type of compilation to be executed (e.g. deciding the item type based on the naming convention, or parsing the source code to decide whether DB2 precompilation is needed).

Depending on the elements to deploy, different actions will be executed:

  • Copybooks, JCLs and all other elements that don’t need compilation will be shipped to the destination PDSs.

  • Programs will be precompiled and compiled as needed, and the binaries will be kept in temporary load PDSs.

A Clarive rule decides which JCL template will be used to prepare/deploy each type of element and submits the JCL after replacing the variables with their actual values for the deployment project and environment.
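As an illustrative sketch of how such a rule could map element types to templates, consider the following. Note that the template names, dataset names and keys below are invented for illustration; they are not actual Clarive rule syntax:

```yaml
# Hypothetical sketch: pick a JCL template per element type, then
# substitute environment-specific values before submitting the JCL.
element_templates:
  COBOL_DB2: "COMPILE_DB2.jcl"     # source parse found EXEC SQL, needs DB2 precompile
  COBOL:     "COMPILE_BATCH.jcl"   # plain compile and link-edit
  COPYBOOK:  "SHIP_PDS.jcl"        # no compilation, ship straight to the PDS

variables:
  LOADLIB: "${project}.${environment}.LOADLIB"   # resolved per project/environment
  DBRMLIB: "${project}.${environment}.DBRMLIB"

# The rule would render the chosen template, replacing each ${...}
# placeholder with the value configured for the target environment,
# then submit the resulting JCL and monitor its return code.
```

The key point is that the template choice and the variable values are both derived from configuration, so the same rule serves every project and environment.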

Different z/OS element natures


Deploying elements

Depending on the elements to deploy, different actions will be executed:

  • Programs will be shipped to the destination PDSs and bound as needed.

A Clarive rule will decide what JCL template will be used to deploy each type of element and will submit the JCL after replacing the variables with their actual values depending on the deployment project and environment.

Deploy and bind example

As usual, Clarive will keep track of any nested JCL jobs that may run associated with the parent JCL.


Clarive will start a rollback job whenever an error condition occurs in the rule execution. It will automatically check out and deploy the previous version of the elements available in the source repository.

Conclusion and the next steps

In this DevOps for the Mainframe series, we have presented the key features of Clarive for bringing mainframe technologies into the full, enterprise-wide continuous delivery DevOps pipeline.

Once an organization has decided to modernize mainframe application delivery, there is a set of recommended steps:

Establish Prerequisites

The first step IT leaders need to take before modernizing mainframe application delivery is to evaluate whether the correct prerequisites are in place or in progress. Successfully implementing a mainframe application delivery tool like Clarive requires either an existing process or the will to implement one.

Assess Operational Readiness

Many organizations discover too late that they have underestimated, sometimes dramatically, the investment needed in people, processes, and technology to move from their current environment toward modernized mainframe application delivery. An early readiness assessment is essential to crafting a transition plan that minimizes risk and provides cross-organizational visibility and coordination for the organization’s modernization initiatives. Many organizations already have some sort of mainframe delivery tooling in place.

When key processes have been defined within such a framework, optimizing and transforming them for enterprise-wide delivery is significantly easier, but they still need to be integrated into a single Dev-to-Ops pipeline, as mainframe delivery requests typically run outside the reach of release composition and execution.

Prepare the IT Organization for Change


IT leaders should test the waters to see how ready their own organization is for changing the way mainframe application delivery processes fit into the picture. IT managers must communicate clearly to staff the rationale for the change and provide visibility into the impact on individual job responsibilities. It is particularly important that managers discuss any planned reallocation of staff based on reductions in troubleshooting time, to alleviate fears of staff reductions.

Mainframe aspects

In this series we reviewed many different aspects of fully bringing your mainframe system up to speed with your enterprise DevOps strategy:

  • Define the critical capabilities and tooling requirements to automate your mainframe delivery pipeline.

  • Decide where your code will reside and who (Clarive or a mainframe tool) will drive the pipeline build and deploy steps.

  • Integrate the pipeline with other functional areas, including related services, components and applications, so that releases will be a fully transactional change operation across many systems and platforms.

We hope you enjoyed it. Let us know if you’d like to schedule a demo or talk to one of our engineers to learn more about how other organizations have implemented the mainframe into the overall delivery pipeline.

Other posts in this series:

Bringing DevOps to the Mainframe pt 1
Bringing DevOps to the Mainframe pt 2: Tooling
Bringing DevOps to the Mainframe pt 3: Source code versioned in z/OS

Running pipeline jobs against critical environments often requires a scheduled execution. Clarive scheduled jobs always run in three main steps, called “Pre”, “Run” and “Post”.

Why run a pipeline in phases?

Most of the time, the entire deployment job should not wait for the scheduled time to run all of its tasks; many of them can start as soon as the job is scheduled.

There are several phases or stages to every scheduled deployment, most of which can run as early as the job is scheduled. Tasks such as building, packaging, testing and even provisioning infrastructure can take place earlier if they do not impact the production environments.

When defining a pipeline, always think about what can be checked in earlier phases so that the evening deployment run does not fail on something that could have been caught previously.

Separating your pipeline into different stages


Pipeline phases

Here are the different pipeline deployment phases you can run in Clarive.

Deployment preparation (PRE)

The deployment pipeline will take care of:

  • Creating the temporary deployment directory

  • Identifying the files changed to be deployed (BOM)

  • Checking out the files related to the changesets to be deployed from either code repositories (code) or directly attached to the changesets (SQL).

  • Renaming environment-specific files (e.g. web{PROD}.properties will be used only for deployments to the PROD environment)

  • Replacing variables inside the files (e.g. replacing ${variable} with the actual value configured for the affected project in the target environment)

  • Nature detection: Clarive identifies the technologies to deploy by analyzing the BOM
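The renaming and variable-replacement steps above can be sketched as rulebook shell steps in the style used later in this series. The directory layout, file names and variables here are invented for illustration, not the exact operations Clarive runs internally:

```yaml
# Hypothetical sketch of PRE-phase housekeeping.
- shell [Create temporary deployment directory]: |
    mkdir -p /tmp/deploy/${job}

- shell [Select environment-specific files]: |
    # web{PROD}.properties is kept only when deploying to PROD;
    # ${environment} is resolved by Clarive before the shell runs
    cd /tmp/deploy/${job}
    mv "web{${environment}}.properties" web.properties

- shell [Replace variables inside files]: |
    # placeholders in web.properties are swapped for the values
    # configured for this project and target environment
    sed -i "s|__DB_HOST__|${db_host}|g" web.properties
```

Everything in this phase is safe to run as soon as the job is scheduled, since nothing touches the target environment yet.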

Common deployment operations


Deployment operations (RUN)

The actual deployment operations are executed in this phase of the Clarive job (movement of binaries, restart of servers, verification of the installation, etc.).

Deployment operations


Post-deployment operations (POST)

After the deployment is finished, Clarive takes care of any tasks needed depending on the final job status.

Some of the typical operations performed then are:

  • Notify users

  • Transition changeset states

  • Update repositories (i.e. tag Git repositories with an environment tag)

  • Synchronize external systems

  • Cleanup temporary directories locally and remotely

  • Synchronize related topics statuses (defects, releases, etc.)

Post deployment operations



Whether or not you use Clarive, when defining your deployment pipelines always think in terms of these three stages or phases:

  • PRE – everything that can run as soon as it has been scheduled
  • RUN – crucial changes that run at the scheduled time
  • POST – post activities or error recovery procedures
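A pipeline split along those lines might look like the sketch below. The pre/run/post keys and the steps inside them are illustrative only; consult the Clarive documentation for the exact rulebook syntax for phases:

```yaml
# Illustrative three-phase layout (keys and steps are hypothetical).
pre:   # runs as soon as the job is scheduled
  - shell [Build and package]: |
      make build && make package
  - shell [Run tests early]: |
      make test          # fail now, not during the evening window

run:   # runs at the scheduled deployment time
  - shell [Ship binaries and restart]: |
      ./deploy.sh ${environment}

post:  # runs after the deployment finishes, success or failure
  - email:
      subject: Deployment ${job} finished
```

The build and test steps deliberately sit in the early phase so that a broken artifact is detected hours before the production window opens.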

Happy DevOps!

Try Clarive now and start with your DevOps journey to continuous delivery.


As we have seen in previous posts, having a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive’s rulebooks.

You just have to follow these three simple steps:
1. Get your 30-day trial
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: we will detail the code we need.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

  - workspace: "${project}/${repository}"
  - server: "https://<my-clarive>"
  - art_path: "${server}/artifacts/repo/${project}"

Building our application

In this step, we choose the Elixir Docker image and use Mix as the build tool.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as we have our own application tests, this step is as simple as running the right command.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now, it’s time to choose where our app will run. For example, send the tar file to another server and run the app.

    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in the PROD environment or just a Docker container for QA.
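One possible way to resolve remote_server per environment is a variable layout like the following. This is a sketch only: the host names are invented and the per-environment nesting is not necessarily Clarive's exact variable syntax:

```yaml
# Hypothetical per-environment values; both host names are placeholders.
variables:
  PROD:
    remote_server: prod.aws.example.com   # AWS instance for production
  QA:
    remote_server: qa-docker.local        # local Docker container for QA
```

With something like this in place, the same ship/shell steps shown above run unchanged against whichever environment the job targets.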

Happy Ending

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps the rulebook will follow.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.

Visit Clarive documentation to learn more about the features of this tool.

Increase the quality and security of node.js applications that rely on NPM packages.

Developers often create small building blocks of code that solve one particular problem and then “package” this code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds of such small node.js packages. Development teams often use these packages to compose larger custom solutions.

While NPM allows teams to exploit the expertise of people who have focused on a particular problem area, whether inside or outside the organization, and helps teams work together better by sharing talent across projects, we often see companies struggling with the quality of the packages being used and, as a result, looking for ways to control their usage better.

Finding better ways to manage and control which packages are deployed in the cloud and/or the data center is vital.

Organizations want to reduce the risk of failure or instability that comes from downloading the latest, potentially improperly tested, version of a required NPM package from the internet.

This video shows how Clarive can help you, making your node.js applications that use NPM packages more secure and stable.

Start creating NPM packages with Clarive. Get your 30-day trial now.

In this video we can see how Clarive checks the availability of certain versions of applications (BankingWeb 2.1.0 and ComplexApp 1.2.0) needed in an environment before deploying a version of another application (ClientApp 1.1.0).

We could consider these applications (BankingWeb and ComplexApp) as prerequisites for the actual application (ClientApp) whose deployment is requested.

When the required application versions (BankingWeb 2.1.0 and ComplexApp 1.2.0) are not yet in the target environment, Clarive blocks the deployment until the applications are either deployed first (in a separate job) or added to the same deployment job as the application that requires them.

Start your DevOps journey to continuous delivery with Clarive now. Get your 30-day trial here.

Given the fact that the EU’s GDPR law (replacing the 1995 Data Protection Directive) will go into effect on May 25, almost every software vendor is jumping on the bandwagon to explain how they can help.

Let me be compliant, then, and follow the stream.

GDPR and Clarive

When reading about the subject online, I noticed that most articles center around the specific business and legal obligations regarding personal data. These articles focus on physical data processing and the data controller obligations to manage processing. Of course!

This is not what I want to write about in this blog, however. The GDPR is also expected to impact the software delivery life cycle and its related IT development processes for organizations that plan to roll out IT projects within the EU.

There are many lifecycle flavors on the market to deliver software today, whether waterfall, iterative, or agile based. All of them define the way to manage and control the IT project, from planning to rollout, across the different application layers or modules, and platforms. Common software layers that will be directly impacted by the new GDPR law include of course databases and their related architecture, but also the data transport, data security, presentation, and application layers… basically, potentially every software aspect could be affected!

The impact of GDPR on application delivery

This means that if your company intends to continue rolling out systems in the EU, you will have to deal with new functional and technical requirements introduced by the GDPR, such as the following (not an exhaustive list, only some important ones to make the point):

  • Ensure data protection in the system and the organization, by design and by default (Recital 78 and Article 25)
  • Use data encryption when possible (Recitals 83 and Articles 6-4(e), 32-1(a))
  • Use Data pseudonymization when possible (Recitals 26, 28, 29, 78 and Articles 6-4(e), 25-1, 32-1(a))
  • Anonymize data when possible (Recital 26)
  • Share processing attributes and steps to the data subject in an easy to understand form at the time of data collection, electronically or in writing (Recitals 39, 58 and Articles 12-1, 13-2(a-f))
  • Make data portable to another provider (maybe competitor) (Recital 68 and Articles 13-2(b), 14-2(c), 20)
  • Ensure data is secured, and integrity and confidentiality are maintained, using technical and organizational means under the management of the controller (Recital 49 and Articles 5-1(f), 32-1(b-d))

While a number of these new requirements might seem like no-brainers because they were already part of your software design, others will trigger new requirements that need to be implemented quickly and with quality before the law is enforced.

Failing does not really seem to be an option. Not complying with the GDPR requirements could result in very serious penalties! From what I have read, the worst-case scenario could be a fine of €20 million or 4 percent of the company’s previous year’s total global revenue, whichever is greater. Ouch!

The clock is ticking, how do you track progress and ensure compliance?

With only a few more months left, how are you progressing with the delivery of these new requirements? Can you truly track requirement progress throughout your software delivery chain? How confident are you that all policies are correctly implemented?

When speaking to bigger clients, this is often their biggest challenge: they have deployed multiple tools to support software delivery. Coding is fragmented, and the delivery toolchain is often poorly integrated, leading to extensive manual activities within the delivery process and a lack of end-to-end visibility and traceability.

At Clarive we believe in SIMPLICITY. Your software delivery toolchain should be as simple as possible, requiring the minimal set of tools to get the work done fast, with quality, and in a transparent way. For smaller organizations and startups, this can be a single tool: Clarive! Bigger organizations often do need multiple tools to support multiple platforms, but they miss overall orchestration and automation. Not those that use Clarive!

As a simple, lean application delivery platform, Clarive gives you the traceability you need to track progress on your GDPR (and other) requirements with ease.

Clarive not only helps with end-to-end tracking, its powerful role-based, ruling and workflow automation system also offers capabilities that will help you to ensure everyone on the team remains in compliance with the company’s legal and other requirements, like those for GDPR. For example:

  • Workflow rules: Workflow rules allow you to accept or reject code or actions that do not comply with company policies. For example, our support for code reviews ranges from static code analysis-based decision-tree rules to multi-level acceptance approvals within the delivery process.
  • Role-based security: Permissions can be set very granularly according to the role members have with respect to the project.
  • Cross-platform & process automation: The best way to ensure compliance is to AVOID manual interventions. Clarive allows you to automate every delivery execution step (apart from the coding itself, of course) and process workflow. We support this across teams and platforms, making manual activities (other than approvals) redundant.
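As a hedged sketch of such an automated gate, a rulebook step in the style used elsewhere in this blog could fail the job whenever static analysis reports a problem. The `analyzer` command and its flag are placeholders for whatever tool your team uses, not a real Clarive or third-party CLI:

```yaml
# Hypothetical quality gate: a non-zero exit code from the analyzer
# makes the job fail, so non-compliant code never reaches deployment.
- shell [Static code analysis gate]: |
    cd {{ workspace }}
    # 'analyzer' stands in for your static analysis tool of choice
    analyzer --fail-on error .
```

Because the gate runs inside the pipeline, the evidence of every check is kept in the job log, which is exactly the kind of traceability an auditor will ask for.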

Sounds great? Why don’t you take a look at Clarive now? As our customers can attest, you can get started quickly. Just download your 30-day trial here and try it out yourself.

Today we’re going to see how to deploy an application in Google Play Store with Clarive.

In this post we’ll see how to automate the compilation of our applications, as well as making the subsequent upload to the Play Store completely automatic. Clarive will be the only tool we use throughout the whole process.
All of this will save you costs, as you will avoid manually carrying out the compilation and deployment each time a new version of an application is launched.

To develop this, we install a 30-day trial Clarive instance. Using Docker containers, we will compile and deploy an Android application to the Google Play Store with the rulebook explained below.
The whole process is managed through the configuration in the .clarive.yml file in the root folder of our repository.


In order to complete this process there are some requirements, which are as follows:
– The Play Store .json file, in order to be able to upload the application automatically.
– The application, ready to be compiled and signed automatically.
– A Clarive instance installed, which you can get here

Designing our .clarive.yml

The .clarive.yml file will be where we will define the steps that should be followed for the compilation and deployment process.

Defining our variables

First we will declare the variables that will be used throughout our pipeline process.

  artifact_path: http:///artifacts/repo/
  # Root path to our artifact repository.

  artifact_repo: "public"
  # Name of repository

  artifacts_store_path: "android/app/app-release-{{ ctx.job('change_version') }}.apk"
  # Path inside the repository where the generated APK will be stored

  json_file: "clarive-rulebook.json"
  # Name of our JSON file for its uploading to Play Store

  workspace: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # Workspace where our development files will be stored

  package_name: codepath.apps.demointroandroid2clarive
  # Pack name for our app in the Play Store

Building our application

Next is the BUILD phase, and here we are going to compile the application and save the generated file in our artifact repository.

Our build.gradle file must be prepared for the automatic digital signing of the application and its subsequent upload to the Play Store. Likewise, we must have previously uploaded a first version of the application to the Play Store manually.

An image with Gradle and the Android SDK should be enough to run the compilation.

After specifying the Docker image that we will use, we will execute the gradle command within our working directory, so that the compilation and signature of the application can be carried out.

In our particular case, we should use the root user and the sh shell of the image we are using in order to perform the compilation of the application.

    - image:
        name: 'knsit/gradle-android'
        user: 'root'
        runner: 'sh'

    - shell [Compile application]: |
        cd {{ workspace }}/Application_code/app/
        gradle assembleRelease

To complete the BUILD phase, in which we have compiled the application, we need to save the generated APK file, where the compiled application is located, in our artifact repository:

    - artifact_repo = publish [Store APK in artifacts repository]:
        repository: Public
        to: '${artifacts_store_path}'
        from: '{{ workspace }}/Application_code/app/build/outputs/apk/app-release.apk'
    - log:
        level: info
        msg: Application build finished

Once the file has been saved and stored we will have terminated our BUILD phase, and we will now move on to the DEPLOY phase in order to carry out the deployment of the file.

Deploying to Play Store

In this phase, we need our APK file with the compiled application, and our .json Play Store authentication file in order to carry out the upload automatically.

In this case, the image that we need for the deployment will be a Docker image with Fastlane installed, and here we prepare the command we need to execute in order to deploy our application to the Play Store.
In this case we will be uploading it to the alpha track of the application.

Our .json file is located within our development files where the application is, which means we can place it within our workspace.

    - image:
        name: 'levibostian/fastlane'

    - shell [Upload application with Fastlane]: |
        fastlane supply --apk .artifacts/{{ artifact_repo }}/{{ artifacts_store_path }} -p {{ package_name }} --json_key {{ workspace }}/{{ json_file }} -a alpha

    - log:
        level: info
        msg: The app has been deployed to the Play Store

In this way, if we carry out the push to our repository, Clarive will automatically run a deployment (CI) and our application will be uploaded onto our Play Store.


Finally, in the POST step we will email the user who launched the deployment to inform them that it has been completed and let them check the results.

    - log:
        level: info
        msg: Deploy finished
    - email:
        body: |
          Hello {{ ctx.job("user") }},
          <b>{{ ctx.job('name') }}</b> has finished and your app has been deployed. Check it out in <a href="">your Play Console</a>.

          Also, your apk file has been stored in your artifacts repository:
              ${artifact_path}${artifact_repo}/${artifacts_store_path}

        subject: Application deployed to Play Store
        to: ${ctx.job("user")}

To conclude

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps the rulebook will follow.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .yml.

If everything has run correctly, we should be able to see on the monitor how our deployment is being executed.

Job successfully finished

By doing all of this, we have carried out the whole process of compiling and uploading our application to Google Play Store through the use of Clarive’s rulebooks and different Docker containers.
If we look at our page on the Play Store console, we’ll be able to see a message indicating that we have an application ready and waiting.

APK v2 deployed

APK details

This is a brief example that can serve as a reference. You can configure different environments to deploy to, and you can change the operations carried out in each phase in a completely customizable way, adjusting to what each team needs for its development and deployments.

Visit our documentation to learn more about the features of Clarive.

In previous videos we have deployed a Mainframe Git-managed application, a Java application and a Mainframe Endevor package up to the QA environment.

All of the deployments for each of the technologies and target platforms were done with Jobs created by the same versioned pipeline rule. Now that we have reached the final deployment to the production environment, we don’t want to deploy individual changesets anymore. We want to group them into a release/sprint and deploy them together.

We will do that in a single job created by the same pipeline used for the changesets.
In summary, with Clarive we deploy multiple technologies to multiple environments on multiple platforms, in a single job from a single pipeline. This ensures a consistent way of deploying.

Visit our documentation to learn more about the features of Clarive.