In this final installment of the series, we review how Clarive can replace z/OS SCM tools such as CA Endevor or Serena ChangeMan with a global DevOps pipeline that drives unified deployments across all platforms.

Source code versioned and deployed by Clarive

Clarive can deploy source code managed outside the mainframe.

Selecting elements to deploy

In this scenario, z/OS artifacts (programs, copybooks, JCLs, SQLs, etc.) are versioned in Clarive’s Git, although any other VCS would work just as well. The developer selects the versions of the elements to deploy from the repository view and attaches them to the Clarive changeset.

Versions associated to changesets


Preparing mainframe elements

In the PRE step of the deployment job, Clarive checks out the selected version of the source code to deploy, performs the activities needed to check code quality (e.g. running static code analysis, checking for vulnerabilities) and identifies the type of compilation to execute (e.g. deciding the item type from the naming convention, or parsing the source code to determine whether DB2 precompilation is needed).

Depending on the elements to deploy, different actions will be executed:

  • Copybooks, JCLs and all other elements that don’t need compilation will be shipped to the destination PDSs

  • Programs will be precompiled and compiled as needed and the binaries will be kept in temporary load PDSs

A Clarive rule decides which JCL template is used to prepare/deploy each type of element and submits the JCL after replacing the template variables with their actual values for the deployment project and environment.
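To make this concrete, a compile-and-link JCL template might look like the sketch below, built on IBM’s standard IGYWCL cataloged procedure for Enterprise COBOL. The ${...} placeholders (${jobname}, ${acct}, ${srclib}, ${tmploadlib}, ${program}) are hypothetical variable names standing in for the values Clarive would substitute per project and environment, not Clarive’s actual template syntax:

```jcl
//${jobname} JOB (${acct}),'COMPILE ${program}',CLASS=A,MSGCLASS=X
//* Compile and link a COBOL program; keep the binary in a temporary load PDS
//CPLLNK   EXEC IGYWCL,PGMLIB='${tmploadlib}',GOPGM=${program}
//COBOL.SYSIN DD DSN=${srclib}(${program}),DISP=SHR
```

After substitution, Clarive submits the resulting JCL to the job queue and tracks its return codes, as described below.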

Different z/OS element natures


Deploying elements

Depending on the elements to deploy, different actions will be executed:

  • Programs will be shipped to the destination PDSs and bound as needed.

A Clarive rule will decide which JCL template will be used to deploy each type of element and will submit the JCL after replacing the variables with their actual values for the deployment project and environment.

Deploy and bind example


As usual, Clarive will keep track of any nested JCL jobs that may run associated with the parent JCL.

Rollback

Clarive will start a rollback job whenever an error condition occurs in the rule execution. It will automatically check out and deploy the previous version of the elements available in the source repository.

Conclusion and the next steps

In this DevOps for the Mainframe series, we have presented the key features of Clarive for bringing mainframe technologies into the full, enterprise-wide continuous delivery DevOps pipeline.

Once an organization has decided to modernize mainframe application delivery, there is a set of recommended steps:

Establish Prerequisites

The first step IT leaders need to take before modernizing mainframe application delivery is to evaluate whether the correct prerequisites are in place or in progress. Successfully implementing a mainframe application delivery tool like Clarive requires either an existing delivery process or the will to implement one.

Assess Operational Readiness

Many organizations discover too late that they have underestimated, sometimes dramatically, the investment in people, processes, and technology needed to move from their current environment to modernized mainframe application delivery. An early readiness assessment is essential to crafting a transition plan that minimizes risk and provides cross-organizational visibility and coordination for the organization’s modernization initiatives. Many organizations already have some sort of mainframe delivery tooling in place.

When key processes have been defined within such a framework, optimizing and transforming them for enterprise-wide delivery is significantly easier, but they still need to be integrated into a single Dev-to-Ops pipeline, as mainframe delivery requests typically run outside the reach of release composition and execution.

Prepare the IT Organization for Change


IT leaders should test the waters to see how ready their organization is for changing the way mainframe application delivery processes fit into the picture. IT managers must clearly communicate the rationale for the change to staff and provide visibility into the impact on individual job responsibilities. It is particularly important that managers discuss any planned reallocation of staff based on reductions in troubleshooting time, to alleviate fears of staff reductions.

Mainframe aspects

In this series we reviewed many different aspects of fully bringing your mainframe system up to speed with your enterprise DevOps strategy:

  • Define the critical capabilities and tooling requirements to automate your mainframe delivery pipeline.

  • Decide where your code will reside and who (Clarive or a mainframe tool) will drive the pipeline build and deploy steps.

  • Integrate the pipeline with other functional areas, including related services, components and applications, so that releases will be a fully transactional change operation across many systems and platforms.

We hope you enjoyed it. Let us know if you’d like to schedule a demo or talk to one of our engineers to learn more about how other organizations have integrated the mainframe into the overall delivery pipeline.


Other posts in this series:

Bringing DevOps to the Mainframe pt 1
Bringing DevOps to the Mainframe pt 2: Tooling
Bringing DevOps to the Mainframe pt 3: Source code versioned in z/OS

Docker image management

The problem at hand

The situation with the DevOps toolchain is that it simply has too many moving parts, and these moving parts have become a cumbersome part of delivering applications.

Have you stopped to think how many different tools are part of your pipeline, and how this is causing your delivery to slow down?

These might be some of the problems you could be facing when setting up your continuous delivery pipeline:

  • Changes in the application require changes in the way it’s built/deployed

  • New components require new tools

  • Many build, test, and deploy tools have plenty of dependencies

The container bliss

Containers are basically lightweight kits of software, ready to run the tasks in your pipeline. When containers are used as part of the pipeline, they can include all the dependencies: code, runtime, system tools, system libraries, and settings. With containers, your software runs the same pipeline no matter what your environment is. You can run the same container in development and staging environments without opening Pandora’s box.

Containers are the way to be consistent in your CI/CD and releasing/provisioning practices.

Other advantages of containers are:

  • Containers can be versioned

  • Containers can hold the most relevant DevOps infrastructure

  • Containers are cheap and fast to run

  • Ops can let Dev drive CI/CD safely (by giving Devs templatized containers)

Clarive and Docker: what a combo!

Docker is therefore a great companion to your DevOps stack. Docker containers allow your project and repository rulebooks to run pipelines alongside any necessary infrastructure without requiring additional software packages to be installed in the Clarive server. Clarive runs your DevOps pipelines within managed containers.

By using containers in Clarive you can:

  • Isolate your users from the server environment so that they cannot break anything.

  • Version your infrastructure packages, so that different versions of an app can run different versions of an image.

  • Simplify your DevOps stack by having most of your build-test-deploy continuous delivery workflows run on one server (or more, if you have a cluster of Clarive servers), instead of having to install runners for every project everywhere.
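As a sketch of what this looks like in practice, a rulebook step can ask for its shell commands to run inside a given image. The image name and commands below are illustrative, following the image/shell rulebook syntax shown later in this series:

```yaml
build:
  do:
    - image:
        name: 'node'      # image pulled from your curated registry or Docker Hub
        runner: 'bash'
    - shell: |
        npm ci
        npm test
```

The pipeline itself stays the same across environments; only the container provides the toolchain.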


Clarive and Docker flowchart


Curating a library of DevOps containers

Using a registry is a good way of keeping a library of containers that target your continuous delivery automation. With Clarive you can maintain a copy of a local registry that is used exclusively for your DevOps automation.

Defining “natures”

Each repository in your project belongs to or implements one or more natures. The nature of your code or artifacts defines how they are going to be implemented. A nature is a set of automation and templates, and these templates can use different Docker containers to run.

For example, your application may require Node + Python, and therefore two natures. If you keep these natures in templates they will be consistent and will help developers comply with a set of best practices on how to build, validate, lint, test and package new applications as they move to QA and live environments.

Running commands on other servers

Clarive uses Docker for running shell commands locally. That guarantees that rulebooks (in the project’s .clarive.yml file) will not have access to the server(s) running your pipelines.

But you can still run shell commands on other servers and systems, such as Linux, Windows, various Unix flavors and other legacy systems (including the mainframe!), using the host: option of the shell: command.
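A minimal sketch, assuming a server CI named build01 has already been registered in Clarive (the long form of shell: with host: and cmd: keys is our assumption of the syntax, not taken verbatim from the docs):

```yaml
deploy:
  do:
    - shell:
        host: build01     # run on this remote server instead of a local container
        cmd: |
          uname -a
```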

How do I use my own containers?

If the container is not available on the Clarive server, the Clarive rulebook downloads it from Docker Hub.

So, to use your own containers, you have two options:

  • Upload them to Docker Hub, then use them from your rulebook. Clarive will download the container on the first run.

  • Install them on your Clarive server. On the first run Clarive will build another version of your container based on Clarive’s default Dockerfile, named clarive/ plus your container name. You don’t need to add the clarive/ prefix yourself; that’s done automatically.
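Putting the first option together, a rulebook might reference a custom image published to Docker Hub; myorg/build-tools is a hypothetical image name used only for illustration:

```yaml
build:
  do:
    - image:
        name: 'myorg/build-tools'   # downloaded from Docker Hub on the first run
        runner: 'bash'
    - shell: |
        make all
```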

Docker containers in your pipeline

Manage all active Docker containers in your pipeline from within Clarive


Getting started today

Using containers is an important step in implementing a continuous delivery and continuous deployment process that is streamlined and avoids environment clutter.

Head over to our 30-day trial and let Clarive run your DevOps automation in Docker containers for better consistency and easy setup of your temporary environments.


Learn more about Clarive Docker admin interface with this blog post and learn how to manage containers and docker images.


Mainframe Tooling

In this second part of this blog series we will detail the mainframe features you can find in Clarive and how they are integrated into the system.

Clarive Mainframe Features

Clarive manages all aspects of the z/OS code lifecycle:

  • Sending files to z/OS partitions
  • Character translation maps and codepages
  • Identify relationships – impact analysis
  • JCL Template Management
  • Submit JCL
  • Nested JCL Management and synchronous – asynchronous queue control
  • Retrieve Job Spool output and parse results

Integration rules

Clarive z/OS features three entirely different integration points with the mainframe. Each integration point serves a specific purpose:

  • Job queue access – to ship files and submit jobs into the z/OS job queue in batch mode. Clarive will track all nested jobs and parse results into the job tree.

  • ClaX Agent – for delivering files into datasets and/or OMVS partitions and executing z/OS processes online. This is the preferred way of running REXX scripts sent from Clarive to the mainframe, with access to z/OS facilities such as SDSF®, ISPF®, VSAM® data records or RACF®.

  • Webservices Library – for writing code that initiates calls from the mainframe directly into Clarive using TCP/IP sockets and the RESTful webservices features of the Clarive rules.

Clarive to Mainframe Integration Point


Tool considerations

Clarive is a tool that allows enterprise companies to implement an end-to-end solution to control the software lifecycle, providing countless out-of-the-box capabilities that help solve complex situations (automation, integration with external tools, critical regions, manual steps in the process, collaboration, etc.).

CCMDB – Configuration Items

In Clarive, any entity that is part of the physical infrastructure or the logical lifecycle is represented as a configuration item (CI). Servers, projects/applications, source repositories, databases, users, lifecycle states, etc. are represented as CIs in Clarive under the name Resource.

Any resource can have multiple relationships with other resources (e.g. an application is installed on a server in production, a user is a developer of an application, the Endevor “system x/subsystem y” combination is the source code repository related to an application, etc.).

The graph database made up of these entities and relationships is Clarive’s Change-oriented Configuration Management Database (CCMDB). The CCMDB is used to keep the whole system configuration as well as to perform impact analysis, manage infrastructure requests, etc.

CCMDB

CCMDB navigation

Natures / Technologies

Clarive natures are special CIs in Clarive that automate the identification of the technologies to be deployed by a deployment job. A nature can be detected by file path/name (e.g. nature SQL: *.sql), by project variable values (e.g. ${weblogic}: Y) or by parsing the changed files’ code (e.g. COBOL/DB2: qr/EXEC SQL/).

Natures list


JES spool monitoring

When submitting a JCL on z/OS, Clarive takes care of downloading and parsing the spool output, splitting out the DDs available in the output and identifying the return codes of all steps executed in the JCL.

JES output viewer


Calendaring / Deployment scheduling – Calendar slots

Every deployment job in Clarive is scheduled, and Clarive provides the available slots depending on the infrastructure affected by the deployment. Infrastructure administrators can define these slots at any CCMDB level (environment, project, project group, server, etc.).

Calendar slots definition


Rollback

A Clarive rule allows the administrator to define how changes are rolled back. For both Endevor and ChangeMan ZMF, it will execute a Backout operation on each package included in the job.

Rollback control


Next Steps

Features are an important consideration when picking the right tool to bring DevOps to the mainframe.

In the next two installments of this series we will review how Clarive can deploy mainframe artifacts (or elements), either by driving popular z/OS SCM tools such as CA Endevor or Serena ChangeMan, or by replacing them with Clarive’s own z/OS deployment agents.


Read the first post of this series and learn more on how to bring DevOps to the mainframe.



Clarive Test

A test plan is an essential part of software development. It is a must if you want to get dev and ops people on the same page. As a guide and a workflow, it reinforces your project’s success by detecting potential flaws in advance. Tracking is also an important part of the testing process: as changes are applied, the test plan should be updated.

Unit tests, code QA, performance tests, integration and regression tests are some of the most common types of software testing, and in some cases they are even compulsory before a production deployment.

With Clarive you can create a QA workflow that combines manual and automated steps, creating test plans automatically on a pull-request/merge-request, which in Clarive can actually be any changeset (user stories, features, bugfixes, etc.). Test plans can then be used to automate test validation.

As in all software development processes, it’s necessary that each code revision goes through a proper testing process to ensure the quality of the product.

Automating test plan creation

You can also create test plans automatically from the changes included in the version to be released. This feature detects the files modified by the developer, creates a plan with the test cases (automated and manual) that cover the modified or created functions, and links them to the release. Clarive will only add the test cases that are directly affected by one of the functionalities modified in the release.

This way Clarive creates a test plan suitable for each release and, if the user wishes, automates the test cases and executes the required actions depending on the results.


Download our whitepaper: The Value of DevOps for Test & Quality Managers and learn more about how to minimize the risk of product failure with Clarive.



The DevOps movement in general tends to exclude technologies that are outliers to the do-it-yourself spirit of DevOps. This happens where certain technologies are closed to developer-driven improvements, or where roles are irreversibly inaccessible to outsiders.

That’s not the case with the mainframe. The mainframe is armed with countless development tools and programmable resources that rarely fail to enable Dev-to-Ops processes.

Then why have DevOps practices not prospered on the mainframe?

  • Ops are already masters of any productive or pre-productive environments – so changing the way developer teams interact with those environments requires more politics than technology and is vetted by the security practices already in place.
  • New tools don’t target the mainframe – the market and open source communities have focused first on servicing Linux, Windows, mobile and cloud environments.
  • Resistance to change – even if there were new tools and devs could improve processes themselves, management feels that trying out new approaches, especially those that go “outside the box”, could end up putting these environments, and mission-critical releases at risk.

Organizations want to profit from DevOps initiatives that are improving the speed and quality of application delivery in the enterprise at a vertiginous pace. But how can they leverage the processes already in place alongside the faster, combined pipelines set up on the open side of the house?

Enter Clarive for z/OS

Our clients have been introducing DevOps practices to the mainframe for many years now. This has been made possible thanks to the well-known benefits of accepting and promoting the bimodal enterprise.

There are two approaches that can be used, even simultaneously, to accomplish this:

  • Orchestrate mainframe tools and processes already in place – driving and being driven by the organization’s delivery pipeline
  • Launch modernization initiatives that change the way Dev and Ops deliver changes in the mainframe

Business Benefits of Bringing DevOps to the Mainframe

The benefit is simple. Code that runs on the mainframe is expensive and obscure. By unearthing practices and activities, organizations gain valuable insight that can help transform the z/OS-dependent footprint into a more contained and flexible part of the pipeline, with these key benefits:

Coordinate and Speed-up Application Delivery

Mainframe systems don’t run in isolation. The data they manage and the logic they implement are shared as a single entity throughout the enterprise by applications in the open, cloud and even mobile parts of the organization. Making changes that disrupt different parts of this delicate but business-critical organism needs to be coordinated at many phases, from testing to UATs to production delivery. Delivering change as a single transactional pipeline has to be a coordinated effort, both forward and backward.

End-to-End Visibility

DevOps practices often perceive the mainframe as a closed box that does not play well with activities targeting better visibility and end-to-end transparency. Having dashboards and reports that work as input and output between mainframe release processes and other pipelines will help deliver change.

Run a Leaner Operation and Avoid Waste

Creating mainframe processes that are part of the bigger picture helps determine where constraints may lie and which parts of the pipeline may be deemed obsolete or become bottlenecks.

Lower Release Costs

Mainframe tools are expensive and difficult to manage. MIPS and processing on the mainframe may be capped, and new processes could create unwanted expenses. Relying more on tools that drive the mainframe from Linux may in turn translate into significant per-release cost savings, encouraging a more continuous release process.

Use Cases

The following is a list of the most relevant benefits of Clarive z/OS and popular use cases that our clients have implemented using the Clarive z/OS platform and tools:

  • Compile and link programs using JCL preprocessed templates. Deploy DB2 items directly to the database.
  • Compile related COBOL programs when Copybooks change
  • Total control over what is deployed to each environment at a given time
  • Schedule jobs according to individualized release and availability calendars and windows
  • Request approval for critical deployment windows or sensitive applications or items
  • Keep the lifecycle in sync with external project and issue management applications
  • Run SQA on the changes promoted. Block deployment if a minimum score has not been reached
  • Reliably roll back changes in Production, replacing previous PDS libraries with the correct ones
  • Provision CICS resources on request by users

Stay tuned for more of this DevOps for the mainframe blog series!


Try Clarive now and start bringing DevOps to the mainframe.


This release contains a lot of minor fixes and improvements over 7.0.12. It also focuses on interface refactoring, improving the Kanban boards.

Git repositories navigation on a tab

Clarive 7.0.13 includes a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more.

To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

Repository Navigation

Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded in Clarive) can include default data as part of it.

ClariveSE profile now includes a sample-html project and two releases with several changes on them. It also automates the launch of 3 deployment jobs to INTE, TEST, and PROD.

To get the profile and the default sample data installed, execute cla setup <profile> and answer yes to the question Load default data?. Once you start the Clarive server it will automatically load the profile and the default data.


Kanban Board improvements

Custom card layout

You can now configure the layout of the cards of your Kanban Boards to show the information that you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

Cards Layout

Auto refresh

In the Quick View options panel (click on View button), now you’ll find a switch to toggle the Auto Refresh for this board. It will be updated with changes in the topics shown whenever the board tab is activated.

Auto refresh

Save quick view by user

In Clarive 7.0.13 the options selected in the quick view menu will be saved locally in your browser storage so every time you open the board it will use the last swimlanes, autorefresh, cards per list, etc. configuration you used.

Predefined statuses by list

Whenever you create a new board, it will be created with three default lists, and default statuses will now be assigned to these lists with the following rules:

  • New: Initial statuses
  • In Progress: Normal statuses
  • Done: Final and Cancelled statuses

Killtree when job is cancelled

One of the most important improvements in Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when the job is canceled from the interface.


You can read about this new feature in this blog post

Improvements and issues resolved

  • [ENH] Git repositories navigation on a tab
  • [ENH] Clax libuv adaptation
  • [ENH] NPM registry directory new structure
  • [ENH] Add rulebook documentation to service.artifacts.publish
  • [ENH] Return artifact url on publish
  • [ENH] Invite users to Clarive
  • [ENH] Load default data by profile
  • [ENH] Users can choose shell runner for rulebooks
  • [ENH] Kill job signal configured in yml file
  • [ENH] Add default workers configuration to clarive.yml file
  • [ENH] Boards shared with “ALL” users
  • [ENH] Kanban custom card fields
  • [ENH] Killtree when job is cancelled
  • [ENH] Kanban boards auto refresh
  • [ENH] Make sure to save kanban quick view session
  • [ENH] Filter data according to filter field in Topic Selector fieldlet
  • [ENH] Make sure new created boards have default lists
  • [ENH] Add date fields to card layout configuration
  • [FIX] Check user permissions in service.topic.remove_file
  • [FIX] Make sure user with permissions can access to rule designer
  • [FIX] Make sure CI permissions are working correctly
  • [FIX] Make sure that the ci grid is updated after the ci is modified
  • [FIX] Control exception when running scripts.
  • [FIX] Change project_security structure on user ci
  • [FIX] User without project field permissions can edit the topic
  • [FIX] Make sure React apps work in IE 11
  • [FIX] Show cis in create menu (standard edition)
  • [FIX] Administrator should be able to delete artifacts in ClariveSE
  • [FIX] When publishing NPM packages with scopes tarball is empty
  • [FIX] Make sure default values from variables are used when adding them
  • [FIX] Make sure notifications are sent only to active users
  • [FIX] Make sure to show username in “Blame by time” option for rules versions
  • [FIX] Remove default values when changing type of variable resource
  • [FIX] Allow single mode in variables resources
  • [FIX] Escape “/” in URLs for NPM scoped packages from remote repositories
  • [FIX] Avoid console message when opening a variable resource with cis set as default values
  • [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
  • [FIX] Refresh resources from url
  • [FIX] Create resource from versioned tab
  • [FIX] Make sure remote script element always display a final message
  • [FIX] Save variable when deleted default value field in a variable resource
  • [FIX] Make sure topic’s hidden fields are available as topicfields bounds
  • [FIX] Save resource when it does not have to validate fields
  • [FIX] Make sure projects can be added as kanban swimlanes
  • [FIX] Make sure changeset with artifact revision attached can be opened
  • [FIX] Make sure narrow menu repository navigation show changes related to branch
  • [FIX] Formatting of event data when the fail service is used
  • [FIX] Make sure that the chosen element is always selected in the rule tree.
  • [FIX] Reload data resource when refreshing
  • [FIX] Job distribution and last jobs dashlets should filter projects assigned to the user
  • [FIX] Make sure user combo does not have grid mode available in topic
  • [FIX] Make sure system users are shown in user combos
  • [FIX] Display column data in edition mode for a Topic Selector fieldlet in a topic
  • [FIX] Filter projects in grids by user security
  • [FIX] Make sure in topic selector combo all height size are available
  • [FIX] Ship remote file: show log in several lines
  • [FIX] Skip job dir removal in rollback
  • [FIX] Remove FilesysRepo Resource
  • [FIX] Remove permissions option from user menu
  • [FIX] Make sure the screen layout displays correctly when maximizing the description and going back in the browser
  • [FIX] Remove session when user get deactivated
  • [FIX] Resources concurrency
  • [FIX] Validate CI Multiple option just with type ci variables
  • [FIX] Resource not saved when validation fails
  • [FIX] Make sure that the combos search has an optimal performance.
  • [FIX] Make sure ldap authentication returned messages are available in stash
  • [FIX] Show date and time in fieldlet datetime
  • [FIX] User session should not be removed on REPL open
  • [FIX] User with action.admin.users should be able to edit users
  • [FIX] Make username available in dashboard rules execution
  • [FIX] Make sure collapsing lists saved in user session correctly

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgments

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Try Clarive now and start improving your DevOps practices.


Elixir logo

As you can see in our previous posts, having a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive rulebooks.

You just have to follow these three simple steps:
1. Get your 30-day trial
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: we will detail the needed code.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose the Elixir Docker image, using Mix as the build tool.

build:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'
    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as we have our own application tests, this step is as simple as running the right command.

test:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'
    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can send the tar file to another server and run the app there.

deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell:
        host: ${remote_server}
        cmd: |
          cd /tmp/remotepath/
          tar -xvf ${project}_${job}.tar
          mix run

This remote_server could be an AWS instance in the PROD environment or another Docker container just for QA.

Happy Ending

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps that will run once the rulebook is finalized.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (continuous integration) and executes all the code found in .clarive.yml.


Visit Clarive documentation to learn more about the features of this tool.


In this video we can see how Clarive checks the availability of certain versions of applications (BankingWeb 2.1.0 and ComplexApp 1.2.0) needed in an environment before deploying a version of another application (ClientApp 1.1.0).

We could consider the applications(BankingWeb and ComplexApp) as pre-requisites for the actual application (ClientApp) for which the deployment is requested.

When the required application versions (BankingWeb 2.1.0 and ComplexApp 1.2.0) are not yet in the target environment, Clarive will block the deployment until those applications are either deployed first (in a separate job) or added to the same deployment job as the application that requires them.


Start your DevOps journey to continuous delivery with Clarive now. Get your 30-day trial here.