Source Code Maturity Levels

Have you jumped on the DevOps wagon already? You probably have. But perhaps you are still not sure whether your toolbox is missing a certain tool for your current DevOps work.

Or maybe your organization or team is starting to plan its full embrace of DevOps, and you are researching exactly what you need to install in order to have the perfect toolchain. Perhaps you have a gap in some process that you are not even aware of. Establishing a good, solid DevOps toolchain will help determine ahead of time the degree of success of your DevOps practices.

In this blog post, we present maturity-level checklists for different DevOps areas so you have an idea of where you stand in terms of Continuous Delivery.

We will review the maturity levels from the following DevOps aspects:

  • Source code management
  • Build automation
  • Testing
  • Managing database changes
  • Release management
  • Orchestration
  • Deployment and provisioning
  • Governance, with insights

Source code management tool

Commonly known as a repository, a source code management tool provides version control and can be used to keep track of changes in any set of files. A distributed revision control system is aimed at speed, data integrity, and support for distributed, non-linear workflows.

This is the maturity level checklist; we go from no or low maturity to a high maturity state:

  • No version control
  • Basic version control
  • Source/library dependency management
  • Topic branches flow
  • Sprint/project to branch traceability

Source Code Maturity Levels

Build automation tool

Continuous Integration (CI) is a software development practice that aims for frequent integration of individual pieces of work. Commonly each person integrates at least once per day, resulting in several integrations a day. Each integration should be verified by an automated Build Verification Test (BVT). These automated tests detect errors early so they can be fixed before they create bigger problems down the line. This reduces integration issues considerably and allows teams to develop faster and more efficiently.
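
To make the upper levels of the checklist below concrete, here is a minimal sketch of an automated build with a feedback loop, written as a generic YAML pipeline rulebook. The stage names, the schedule: key and the notification script are illustrative assumptions, not any specific product’s syntax:

    build:
      do:
        - shell: make clean build                  # build automated by a central system
        - shell: make verify                       # Build Verification Test (BVT)

    nightly:
      schedule: "0 2 * * *"                        # continuous/nightly build (assumed key)
      do:
        - shell: make clean build verify
        - shell: ./notify_team.sh "$BUILD_STATUS"  # feedback loop; hypothetical helper script
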
This is the automation maturity checklist to see how you are doing with your CI:

  • No build automation. Built by hand. Binary check-in.
  • Build automated by central system
  • Reusable build across apps/projects
  • Continuous/nightly builds
  • Feedback loop for builds

Automation Maturity Levels

Testing framework

Test automation can cover code, systems, services, etc. It allows you to test each modification in order to guarantee good QA. Even a daily or weekly code release can produce a report that is sent out early every morning. To accomplish this you can install the Selenium app in Clarive.
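
A rough sketch of what the automated levels can look like in a YAML rulebook (every stage name and script here is illustrative, and this is not the actual Selenium app configuration):

    test:
      do:
        - shell: npm run test:unit            # automated unit tests
        - shell: npm run test:integration     # automated integration tests
        - shell: ./run_ui_suite.sh --report out/report.html
          # hypothetical wrapper that runs the UI suite and produces
          # the early-morning report mentioned above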

This checklist will help to determine your testing practices level:

  • No tests
  • Manual tests
  • Automated unit/integration tests
  • Automated interface tests
  • Automated and/or coordinated acceptance tests
  • Test metrics, measurements, and insights
  • Continuous feedback loop and low test failure

Testing Maturity Levels

Database Change Management

It’s important to make sure database changes are taken into consideration when releasing to production. Otherwise, your release team will be working late at night trying to finish up a release with manual steps that are error-prone and nearly impossible to roll back.

Check what is your team’s database management current state:

  • Manual data/schema migrations
  • Automated un-versioned data/schema migrations
  • Versioned data/schema migrations
  • Rollback-enabled data/schema migrations

Database maturity levels

Since database schema changes are sometimes delicate, make sure to include your DBA team in the peer review process, so that changes 1) are code; 2) can be merged and patched; 3) can be code reviewed.
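
As a minimal sketch of the top two checklist levels, each schema change can ship as a versioned pair of scripts that the pipeline applies or reverts; the file layout and the migrate.sh runner are hypothetical:

    # Assumed repository layout:
    #   migrations/0042_add_orders_table.up.sql
    #   migrations/0042_add_orders_table.down.sql
    deploy_db:
      do:
        - shell: ./migrate.sh up 0042      # apply the versioned change
    rollback_db:
      do:
        - shell: ./migrate.sh down 0042    # revert it with the paired script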

Release Management and Orchestration

You can fully orchestrate tools that are involved in the process and manage your release milestones and stakeholders with Clarive.

Imagine that a developer makes a change in the code. After this happens, you need to promote the code to the integration environments, send notifications to your team members and run the testing plan.
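
That scenario could be wired up roughly like this in a YAML rulebook (a sketch only; the stage name and the scripts are assumptions):

    promote:
      do:
        - shell: ./deploy.sh --env integration              # promote the change
        - shell: ./notify.sh team "Change promoted to INT"  # notify your team members
        - shell: ./run_test_plan.sh integration             # run the testing plan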

Are you fully orchestrating your tools? Find out with this checklist:

  • Infrequent releases, releases need manual review and coordination
  • Releases are partially automated but require manual intervention
  • Frequent releases, with defined manual and automated orchestration and calendaring
  • Just-in-time or On-demand releases, every change is deployed to production

Orchestration Maturity Levels

Deployment tool

Deploying is the core of how you release your application changes.

How is your team deploying?

  • Manual deployment
  • Deployment with scripts
  • Automated deployment server or tool
  • Automated deployment and rollback
  • Continuous deployment with canary, blue-green and feature-enabling technology

Deployment Maturity Levels

Provisioning

As part of deployment, you should also review your provisioning tasks and requirements. Remember that it’s important to provision the application infrastructure for all required environments, keep environment configuration in check and dispose of any intermediate environments in the process.

Yes, provisioning also has several maturity levels:

  • You provision environments by hand
  • Environment configuration with scripts as part of deployment
  • Provisioning of disposable environments with every deployment
  • Full provisioning, disposing and infrastructure configuration as part of deployment
  • Full tracking of environment-application dependencies and cost management

Provisioning Maturity Levels

We have come a long way doing this with IaC (Infrastructure as Code). Nowadays a lot can be accomplished with less pain using technologies such as containers and serverless, but you still need to coordinate all cloud (private and public) resources and related dependencies, such as container orchestrators.
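
For example, a provisioning stage driven by IaC can be a couple of pipeline steps. The Terraform commands below are real CLI invocations; the surrounding stage layout is an illustrative assumption:

    provision:
      do:
        - shell: terraform init
        - shell: terraform apply -auto-approve     # create or update the environment
    dispose:
      do:
        - shell: terraform destroy -auto-approve   # dispose of the intermediate environment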

In your path to provisioning automation and hands-free infrastructure, make sure you have a clear (and traceable) path to the Ops part of your DevOps team or organization, avoiding bottlenecks when infrastructure just needs a magic touch of the hand. One way of accomplishing that is to have a separate stream or category of issues assigned to the DevOps teams in charge of infrastructure provisioning. We’ll cover that in a later blog post.

With the right reports, you’d be amazed by how many times releases get stuck in infrastructure provisioning hell…

Governance

Clarive also has productivity and management tools, such as Kanban swimlanes, planning, reports and dashboards, that give managers tools to identify problems and give teams a way to quickly check the overall performance of the full end-to-end process.

Here are the key points to make sure you evolve the overall governance of your DevOps process:

  • There is no end-to-end relationship between request (why) and release (when, how, what)
  • Basic Dev-to-Ops traceability, with velocity and release feedback
  • Full traceability from request to deployment
  • Immediate feedback and triggers

Governance Maturity Levels

There you go, let’s DevOps like the grown-ups do

In this post, we have presented the main Continuous Delivery aspects that every DevOps team should be looking to improve, along with their respective readiness levels. So get together with your team and start drawing up a good DevOps adoption plan 😉


Schedule a demo with one of our specialists and start improving your DevOps practices.


In this last installment of this series, we will review how Clarive can replace z/OS SCM tools such as CA Endevor or Serena ChangeMan with a global DevOps pipeline that can drive unified deployments across all platforms.

Source code versioned and deployed by Clarive

Clarive can deploy source code managed outside the mainframe.

Selecting elements to deploy

In this scenario, z/OS artifacts (programs, copybooks, JCLs, SQLs, etc.) are versioned in Clarive’s Git, though it could be done with any other VCS for that matter. The developer selects the versions of the elements to deploy from the repository view, attaching them to the Clarive changeset.

Versions associated to changesets

Preparing mainframe elements

Clarive will check out the selected version of the source code to deploy in the PRE step of the deployment job and will perform the activities needed to check code quality (e.g. execute static code analysis, check vulnerabilities) and to identify the type of compilation to be executed (e.g. decide the type of item depending on the naming convention, parse the source code to decide if DB2 precompilation is needed).

Depending on the elements to deploy, different actions will be executed:

  • Copybooks, JCLs and all other elements that don’t need compilation will be shipped to the destination PDSs

  • Programs will be precompiled and compiled as needed and the binaries will be kept in temporary load PDSs

A Clarive rule will decide which JCL template will be used to prepare/deploy each type of element and will submit the JCL after replacing the variables with their actual values, depending on the deployment project and environment.
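
To illustrate the template idea (this is a hypothetical sketch, not one of Clarive’s shipped templates; the operation name, member names and variables are all invented for the example), a compile template might be parameterized like this:

    submit_compile:
      do:
        - ship_and_submit:                  # illustrative operation name
            template: |
              //${PROJECT}C JOB (ACCT),'COMPILE',CLASS=A
              //COMPILE EXEC PGM=IGYCRCTL
              //SYSLIB  DD DSN=${ENV_HLQ}.COPYLIB,DISP=SHR
            vars:
              PROJECT: INVENT               # replaced per deployment project
              ENV_HLQ: PROD                 # replaced per environment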

Different z/OS element natures

Deploying elements

Depending on the elements to deploy, different actions will be executed:

  • Programs will be shipped to the destination PDSs and bound as needed.

A Clarive rule will decide which JCL template will be used to deploy each type of element and will submit the JCL after replacing the variables with their actual values, depending on the deployment project and environment.

Deploy and bind example

As usual, Clarive will keep track of any nested JCL jobs that may run associated with the parent JCL.

Rollback

Clarive will start a rollback job whenever an error condition occurs in the rule execution. It will automatically check out and deploy the previous version of the elements available in the source repository.

Conclusion and next steps

In this DevOps for the Mainframe series, we have presented the key features of Clarive for bringing mainframe technologies into the full, enterprise-wide continuous delivery DevOps pipeline.

Once an organization has decided to modernize mainframe application delivery, there is a set of recommended steps:

Establish Prerequisites

The first step IT leaders need to take before modernizing mainframe application delivery is to evaluate whether the correct prerequisites are in place or in progress. Successfully implementing a mainframe application delivery tool like Clarive requires either an existing process or the will to implement one.

Assess Operational Readiness

Many organizations discover too late that they have underestimated, sometimes dramatically, the investment needed in people, processes, and technology to move from their current environment for modernizing mainframe application delivery. An early readiness assessment is essential to crafting a transition plan that minimizes risk and provides cross-organizational visibility and coordination for the organization’s modernization initiatives. Many organizations already have some sort of mainframe delivery tools in place.

When key processes have been defined within such a framework, optimizing and transforming them into an enterprise-wide delivery is significantly easier, but they still need to be integrated into a single Dev-to-Ops pipeline, as mainframe delivery requests typically run outside the reach of release composition and execution.

Prepare the IT Organization for Change

IT leaders should test the waters to see how ready their own organization is to change the way mainframe application delivery processes fit into the picture. IT managers must communicate clearly to staff the rationale for the change and provide visibility into the impact on individual job responsibilities. It is particularly important that managers discuss any planned reallocation of staff based on reductions in troubleshooting time to alleviate fears of staff reductions.

Mainframe aspects

In this series we reviewed many different aspects of fully bringing your mainframe system up to speed with your enterprise DevOps strategy:

  • Define the critical capabilities and tooling requirements to automate your mainframe delivery pipeline.

  • Decide where your code will reside and who (Clarive or a mainframe tool) will drive the pipeline build and deploy steps.

  • Integrate the pipeline with other functional areas, including related services, components and applications, so that releases will be a fully transactional change operation across many systems and platforms.

This concludes our blog series on deploying to the mainframe. We hope you enjoyed it. Let us know if you’d like to schedule a demo or talk to one of our engineers to learn more about how other organizations have integrated the mainframe into the overall delivery pipeline.


Other posts in this series:

Bringing DevOps to the Mainframe pt 1
Bringing DevOps to the Mainframe pt 2: Tooling
Bringing DevOps to the Mainframe pt 3: Source code versioned in z/OS

Docker image management

The problem at hand

The problem with the DevOps toolchain is that it just has too many moving parts. And these moving parts have become a cumbersome part of delivering applications.

Have you stopped to think about how many different tools are part of your pipeline? And how this is slowing down your delivery?

These might be some of the problems you could be facing when setting up your continuous delivery pipeline:

  • Changes in the application require changes in the way it’s built/deployed

  • New components require new tools

  • Many build, test, and deploy tools have plenty of dependencies

The container bliss

Containers are basically lightweight kits that include pieces of software ready to run the tasks in your pipeline. When containers are used as part of the pipeline, they can include all the dependencies: code, runtime, system tools, system libraries, settings. With containers, your software will run the same pipeline no matter what your environment is. You can run the same container in development and staging environments without opening Pandora’s box.

Containers are the way to be consistent in your CI/CD and releasing/provisioning practices.
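
The idea in rulebook form, assuming each stage can name a container image (the image: key is illustrative):

    test_dev:
      image: node:8             # pinned image: code, runtime, system libs, settings
      do:
        - shell: npm install && npm test
    test_staging:
      image: node:8             # the identical image runs in staging
      do:
        - shell: npm install && npm test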

Other advantages of containers are:

  • Containers can be versioned

  • Containers can hold the most relevant DevOps infrastructure

  • Containers are cheap and fast to run

  • Ops can let Dev drive CI/CD safely (by giving Devs templatized containers)

Clarive and Docker: what a combo!

Docker is therefore a great companion to your DevOps stack. Docker containers allow your project and repository rulebooks to run pipelines alongside any necessary infrastructure without requiring additional software packages to be installed in the Clarive server. Clarive runs your DevOps pipelines within managed containers.

By using containers in Clarive you can:

  • Isolate your users from the server environment so that they cannot break anything.

  • Version your infrastructure packages, so that different versions of an app can run different versions of an image.

  • Simplify your DevOps stack by having most of your build-test-deploy continuous delivery workflows run in one server (or more, if you have a cluster of Clarive servers), instead of having to install runners for every project everywhere.

Clarive and Docker flowchart


Curating a library of DevOps containers

Using a registry is a good way of keeping a library of containers that target your continuous delivery automation. With Clarive you can maintain a local registry that is used exclusively for your DevOps automation.

Defining “natures”

Each repository in your project belongs to or implements one or more natures. The nature of your code or artifacts defines how they are going to be built and deployed. A nature is a set of automation and templates. These templates can use different Docker containers to run.

For example, your application may require Node + Python, so two natures. If you have these natures in templates they will be consistent and will help developers comply with a set of best practices on how to build, validate, lint, test and package new applications as they move to QA and live environments.
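
Sketching that Node + Python example, assuming natures resolve to templated stages with their own images (all keys here are illustrative):

    natures:
      nodejs:
        image: node:8
        build: npm install && npm run build
        lint: npm run lint
      python:
        image: python:3
        build: pip install -r requirements.txt
        lint: python -m flake8        # assumes flake8 is in requirements.txt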

Running commands on other servers

Clarive uses Docker for running shell commands locally. That guarantees that rulebooks (in the project’s .clarive.yml file) will not have access to the server(s) running your pipelines.

But you can still run shell commands on other servers and systems, such as Linux, Windows, various Unix flavors and other legacy systems (including the mainframe!), using the host: option of the shell: command.
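
For instance (only the shell: command and its host: option come from the text above; the cmd: key and stage layout are assumptions for illustration):

    deploy:
      do:
        - shell:
            cmd: ./restart_app.sh    # hypothetical script on the remote machine
            host: prod-web-01        # run there instead of in a local container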

How do I use my own containers?

If the container is not available on the Clarive server, the Clarive rulebook downloads the container from Docker Hub.

So, to use your own containers, you have two options:

  • Upload them to Docker Hub, then use them from your rulebook; Clarive will download the image on the first run (see the sketch after this list).

  • Install it on your Clarive server. On the first run Clarive will build another version of your container based on Clarive’s default Dockerfile, called clarive/your-container. You don’t need to add the clarive/ prefix to the name yourself; that’s done for you automatically.
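
A sketch of the first option, assuming the rulebook references the image by name (the image: key is illustrative, and myorg/build-tools is a made-up image); Clarive would pull it from Docker Hub on the first run:

    build:
      image: myorg/build-tools:1.2   # your own image, published to Docker Hub
      do:
        - shell: make build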

Manage all active Docker containers in your pipeline from within Clarive


Getting started today

Using containers is an important step in implementing a continuous delivery and continuous deployment process that is streamlined and avoids environment clutter.

Head over to our 30-day trial and let Clarive run your DevOps automation in Docker containers for better consistency and easy setup of your temporary environments.


Learn more about Clarive’s Docker admin interface in this blog post and learn how to manage containers and Docker images.


Releases Across the Pipeline

Application Release Automation (ARA) is a key DevOps discipline that deals with managing the software and infrastructure changes brought about by a new release. To gain a clearer understanding of ARA, we could say that the development lifecycle involves two main areas:

  1. Component packaging release and environment planning

  2. Readiness and delivery pipeline automation

Efficiency in moving releases between environments is an important feature, which we can break down into the following seven subcategories.

1. Deployment Flexibility

Deployment needs to be flexible enough to accommodate options (across all platforms and for all environments) such as the following: mapping of dependencies, notifications, creation of shared components, handling of possible manual steps, dealing with concurrent releases, prioritization, window management and integration.

Clarive’s rule-driven deployment model, based on decision trees, keeps track of the relationships between all artifacts and their configuration thanks to its integrated graph database. Decision trees may include other releases, and rules support the orchestration of both automated and manual activities.

Clarive features a rule design palette containing the controls needed to configure the orchestration scenarios required: pause a job for manual tasks, add approval steps to the job pipeline, or trap an error during pipeline rule execution and ask the Ops team (or equivalent) for a manual decision (retry, abort and rollback, skip and continue…).

2. Release Readiness

Release readiness features include deployment risk analytics, impact analysis, integration with testing tool results and automated triggering of quality gates based on test case results and pipeline execution governance.

Clarive provides support for release readiness, including impact analysis, deployment risk analytics, integration with testing tool results and automated triggering of quality gates based on test cases. It also allows users to customize their experience using dashboards, rules, pipelines and custom reports.

Release readiness tracking, which measures how close a release is to being production-ready, is only possible with an end-to-end orchestration tool.

3. Deployment Plan Management

An ARA tool should feature a complete GUI for creating and managing role-based deployment plans, and the underlying logic of these plans should be displayed as transparently as possible.

Support for application deployment plan management should include the following:

  • Control Logic – to enable unrestricted and flexible automation

  • Pipeline Diagram Support – graphical representation of logic

  • DevOps IDE Platform – an Integrated Development Environment for designing deployment plans

A decision tree structure/rule design, containing collapsible sections for each phase, is used in order to provide an overview of the end-to-end build, deployment, and post-deployment process. Should more sophisticated activities be required, nested deployment plans (reusing rules) can help promote readability and logic reuse.

4. Pipeline Governance

Pipeline governance includes support for root-cause analysis (RCA) and execution profiling, which serves as an aid when improving software performance.

RCA is a method used for problem solving by identifying root causes of faults or problems, and its implementation should be capable not only of tracing a failure to a specific version of a specific component, but also of incrementally patching the component, while highlighting any other components that require similar patching.


Clarive Root Cause Analysis Graph

Deployment pipelines may be monitored through the built-in monitoring service, and individual steps can be traced and analyzed for execution failures, together with any details available from the OS or tool with which interfacing is performed.

Clarive comes with a root cause analysis (RCA) algorithm, whereby the tool contains an extensible database of root causes, and can match failures to said database.

5. Pipeline Recreation

Pipeline recreation refers to the ability to rerun a deployment using a previous version of an environment model and/or rule logic, and should also be a supported feature of an ARA application.

Clarive can determine which versions of which components were used during the delivery process by capitalizing on our software’s configuration management capabilities. The system can use the exact version of the delivery rules that was used for the deployment at the requested time.

6. Deployment Scenarios

Application Release Automation tools should offer flexibility in their support for full-featured deployment styles, thereby ensuring extensibility.

Some typical deployment scenarios one should take into consideration:

  • Canary deployments

  • Blue/Green deployments

  • Rolling or phased deployments

  • Hot deployments

  • Dark launch modes

Clarive supports any deployment scenario/strategy, including partial, rolling and phased deployments, as well as canary, hot, blue/green, and dark launch modes.

7. Test Data Management

Test integration is an important feature; therefore, automation of test data preparation and integration with test management suites need to be supported.

Clarive release models can include test case and test data selection, with the ability to select the correct test strategy according to the type of release and release contents, to control which test data and test sets are to be deployed into an environment, and to schedule when said deployment is to take place.

It is also capable of orchestrating and working with any virtualization, automated testing or test data management tool featuring a CLI, REST or SOAP interface.

How Clarive Lives Up to Moving Releases

Clarive’s greatest strength is moving releases between environments while offering enhanced deployment flexibility, tracking release readiness and managing the release pipeline.

Clarive tracks every stage of releasing a change

Clarive also features pipeline governance and recreation coverage, with built-in support for rerunning deployments using an earlier version of a given environment model.

In short

Application Release Automation has a very defined set of criteria for release pipeline management. Clarive matches every one of the standard criteria, and indeed surpasses them if we take into account extra features such as Kanban boards, advanced error control and a complete DevOps IDE platform for creating end-to-end automation, orchestration and collaboration of all the teams and assets involved in the organization of DevOps initiatives.


Want to know more about ARA? Check our presentation: Why ARA Matters and learn how to take control over complexity to deliver a great workflow.


In this installment of the series we will review how Clarive drives popular z/OS SCM tools such as CA Endevor or Serena ChangeMan as part of your global DevOps pipeline.

Source code versioned in z/OS (Endevor or ChangeMan)

In this scenario, the source code to be deployed resides in the mainframe.

Selection of objects to deploy

Clarive will allow the development user to attach z/OS packages (Endevor or ChangeMan) to the changesets for further processing.

z/OS packages

z/OS repository (Changeman ZMF)

Preparing Payload Elements

Clarive rules will define the logic to prepare the z/OS packages by executing the online or batch operations needed to prepare them (freeze, inspect, generate, execute, etc.).

Inspect package operation (Endevor)

Deployment of the elements

The deployment of the packages will be executed by Clarive by submitting the specific JCL to ship/promote/approve the packages included in the job changesets.

Ship package (Endevor)

Endevor Ship package sample output

Rollback

A Clarive rule will allow the administrator to define how to roll back changes. In both Endevor and ChangeMan ZMF it will execute a Backout operation on each package included in the job.

Backout package (Endevor)

Conclusion

Endevor and ChangeMan are powerful version control tools used in mainframe environments. Fundamentally, they have similar or equivalent workflows for the software configuration lifecycle, but these tools are often used independently of the overall enterprise DevOps pipeline. DevOps implementations often leave them out; however, they remain critical to delivering and running core areas of the business alongside much newer technologies.

With its mainframe orchestration capabilities, Clarive enables organizations with either tool to build better integrated pipelines that bring mainframe changes into the main Dev-to-Ops stream.

Stay tuned for the fourth and final installment of this series!


Read the previous post of this series and learn about the mainframe features you can find in Clarive and how they are integrated into the system.


Mainframe Tooling

In this second part of this blog series we will detail the mainframe features you can find in Clarive and how they are integrated into the system.

Clarive Mainframe Features

Clarive manages all aspects of the z/OS code lifecycle:

  • Sending files to z/OS partitions
  • Managing character translation maps and codepages
  • Identifying relationships – impact analysis
  • Managing JCL templates
  • Submitting JCL
  • Managing nested JCL with synchronous/asynchronous queue control
  • Retrieving job spool output and parsing results

Integration rules

Clarive z/OS features three entirely different integration points with the mainframe. Each integration point serves a specific purpose:

  • Job queue access – to ship files and submit jobs into the z/OS job queue in batch mode. Clarive will track all nested jobs and parse results into the job tree.

  • ClaX Agent – for delivering files into datasets and/or OMVS partitions and executing z/OS processes online. This is the preferred way of running REXX scripts sent from Clarive to the mainframe, with access to z/OS facilities such as SDSF®, ISPF®, VSAM® data records or RACF®.

  • Webservices Library – for writing code that initiates calls from the mainframe directly into Clarive using TCP/IP sockets and the RESTful webservices features of the Clarive rules.

Clarive to Mainframe Integration Point

Tool considerations

Clarive is a tool that allows enterprise companies to implement an end-to-end solution to control the software lifecycle, providing countless out-of-the-box functionalities that help solve complex situations (automation, integration with external tools, critical regions, manual steps in the process, collaboration, etc.).

CCMDB – Configuration Items

In Clarive, any entity that is part of the physical infrastructure or the logical lifecycle is represented as a configuration item (CI). Servers, projects/applications, source repositories, databases, users, lifecycle states, etc. are represented as CIs in Clarive under the name Resource.

Any resource can have multiple relationships with other resources (e.g. an application is installed on a server in production, a user is a developer of an application, the Endevor “system x/subsystem y” combination is the source code repository related to an application, etc.).

The graph database made of these entities and relationships is Clarive’s Change-oriented Configuration Management Database (CCMDB). The CCMDB is used to keep the whole system configuration as well as to perform impact analysis, infrastructure request management, etc.

CCMDB navigation

Natures / Technologies

Clarive natures are special CIs that automate the identification of the technologies to be deployed by a deployment job. A nature can be detected by file path/name (e.g. nature SQL: *.sql), by project variable values (e.g. ${weblogic}: Y) or by parsing the changed files’ code (e.g. COBOL/DB2: qr/EXEC SQL/).
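
Expressed as configuration, the three detection criteria above might look like this (an illustrative rendering only, not Clarive’s actual nature definition format):

    natures:
      SQL:
        detect_path: "*.sql"           # by file path/name
      weblogic:
        detect_var:
          weblogic: "Y"                # by project variable value
      COBOL_DB2:
        detect_code: "EXEC SQL"        # by parsing the changed files' code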

Natures list

JES spool monitoring

When submitting a JCL in z/OS, Clarive takes care of downloading and parsing the spool output, splitting the DDs available in the output and identifying and using the return codes of all steps executed in the JCL.

JES output viewer

Calendaring / Deployment scheduling – Calendar slots

Every deployment job in Clarive is scheduled, and Clarive provides the available slots depending on the infrastructure affected by the deployment. Infrastructure administrators can define these slots at any CCMDB level (environment, project, project group, server, etc.).

Calendar slots definition

Rollback

A Clarive rule will allow the administrator to define how to roll back changes. In both Endevor and ChangeMan ZMF it will execute a Backout operation on each package included in the job.

Rollback control

Next Steps

Features are an important consideration when picking the right tool to bring DevOps to the mainframe.

In the next two installments of this series we will review how Clarive can deploy mainframe artifacts (or elements), either by driving popular z/OS SCM tools such as CA Endevor or Serena ChangeMan, or by replacing them with Clarive’s own z/OS deployment agents.


Read the first post of this series and learn more on how to bring DevOps to the mainframe.



Clarive Test

A test plan is an essential part of software development. It is a must if you want to get devs and ops people on the same page. As a guide and a workflow, it reinforces your project’s success by detecting potential flaws in advance. Tracking is also an important part of the testing process: as changes are applied, the test plan should be updated.

Unit tests, code QA, performance tests, integration and regression tests are some of the most common types of software testing, and in some cases they are even compulsory as a step prior to a production deployment.

With Clarive you can create a QA workflow that combines manual and automated steps, creating test plans automatically on a pull-request/merge-request, which in Clarive can actually be any changeset (user stories, features, bugfixes, etc.). Test plans can then be used to automate test validation.

Like in all software development processes, it’s necessary that each code revision go through a proper testing process to ensure the quality of the product.
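
As a rough sketch of such a QA workflow in rulebook form (the stage name and scripts are assumptions for illustration):

    qa:
      do:
        - shell: npm run test:unit       # automated unit tests
        - shell: npm run lint            # code QA
        - shell: ./run_regression.sh     # regression suite
      # a manual approval step would sit here before production,
      # matching the combined manual/automated workflow described above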

Automating test plan creation

You can also create test plans automatically along with the changes included in the version to be released. This feature detects the files modified by the developer, creates a plan with the test cases (automated and manual tests) that have an impact on the modified or created functions, and links them to the release. Clarive will only add the test cases that are directly affected by one of the functionalities modified in the release.

This way Clarive creates a test plan suited to each release and, if the user wishes, automates the test cases and, depending on the results, executes the required actions.


Download our whitepaper: The Value of DevOps for Test & Quality Managers and learn more about how to minimize the risk of product failure with Clarive.


The DevOps movement in general tends to exclude technologies that are outliers to the do-it-yourself spirit of DevOps. This is due to certain technologies being closed to developer-driven improvements, or roles being irreversibly inaccessible to outsiders.

That’s not the case with the mainframe. The mainframe is armed with countless development tools and programmable resources that rarely fail to enable Dev-to-Ops processes.

Then why have DevOps practices not prospered on the mainframe?

  • Ops are already masters of any productive or pre-productive environments – so changing the way developer teams interact with those environments requires more politics than technology and is vetted by security practices already in place.
  • New tools don’t target the mainframe – the market and open source communities have focused first on servicing Linux, Windows, mobile and cloud environments.
  • Resistance to change – even if there were new tools and devs could improve processes themselves, management feels that trying out new approaches, especially those that go “outside the box”, could end up putting these environments, and mission-critical releases, at risk.

Organizations want to profit from DevOps initiatives that are improving the speed and quality of application delivery in the enterprise at a vertiginous pace. But how can they leverage processes that are already in place alongside the faster, combined pipelines set up on the open side of the house?

Enter Clarive for z/OS

Our clients have been introducing DevOps practices to the mainframe for many years now. This has been made possible thanks to the well-known benefits of accepting and promoting the bimodal enterprise.

There are two approaches that can be used simultaneously to accomplish this:

  • Orchestrate mainframe tools and processes already in place – driving and being driven by the organization’s delivery pipeline
  • Launch modernization initiatives that change the way Dev and Ops deliver changes in the mainframe

Business Benefits of Bringing DevOps to the Mainframe

The benefit is simple. Code that runs on the mainframe is expensive and obscure. By unearthing practices and activities, organizations gain valuable insight that can help transform the z/OS-dependent footprint into a more contained and flexible part of the pipeline, with these key benefits:

Coordinate and Speed Up Application Delivery

Mainframe systems don’t run in isolation. The data they manage and the logic they implement are shared as a single entity throughout the enterprise by applications in the open, cloud and even mobile parts of the organization. Making changes that disrupt different parts of this delicate but business-critical organism needs to be coordinated across many phases, from testing to UATs to production delivery. Delivering change as a single transactional pipeline has to be a coordinated effort, both forward and backward.

End-to-End Visibility

DevOps practices often treat the mainframe as a closed box that does not play well with activities targeting better visibility and end-to-end transparency. Having dashboards and reports that work as input and output between mainframe release processes and other pipelines will help deliver change.

Run a Leaner Operation and Avoid Waste

Creating mainframe processes that are part of the bigger picture helps determine where constraints may lie and which parts of the pipeline may be deemed obsolete or become bottlenecks.

Lower Release Costs

Mainframe tools are expensive and difficult to manage. MIPS and processing on the mainframe may be capped, and new processes could create unwanted expenses. Relying more on tools that drive the mainframe from Linux may in turn translate into significant per-release cost savings, encouraging a more continuous release process.

Use Cases

The following is a list of the most relevant benefits of Clarive z/OS and popular use cases that our clients have implemented using the Clarive z/OS platform and tools:

  • Compile and link programs using JCL preprocessed templates. Deploy DB2 items directly to the database.
  • Compile related COBOL programs when copybooks change
  • Control exactly what is deployed to each environment at a given time
  • Schedule jobs according to individualized release and availability calendars and windows
  • Request approval for critical deployment windows or sensitive applications or items
  • Keep the lifecycle in sync with external project and issue management applications
  • Run SQA on the changes promoted. Block deployment if a minimum score has not been reached
  • Reliably rollback changes in Production, replacing previous PDS libraries with the correct ones
  • Provision CICS resources on request by users

Stay tuned for more of this DevOps for the mainframe blog series!


Try Clarive now and start bringing DevOps to the mainframe.