Source Code Maturity Levels

Have you jumped on the DevOps wagon already? You probably have. But perhaps you are still not sure whether you are missing a certain tool in your toolbox for your current DevOps work.

Or maybe your organization or team is starting to plan its full embrace of DevOps, and you are researching exactly what you need to install in order to have the perfect toolchain. Perhaps you have a gap in some processes that you are not even aware of. Establishing a good, solid DevOps toolchain will help determine ahead of time the degree of success of your DevOps practices.

In this blog post, we will present maturity-level checklists for different DevOps areas so you have an idea of where you stand in terms of Continuous Delivery.

We will review the maturity levels from the following DevOps aspects:

  • Source code management
  • Build automation
  • Testing
  • Managing database changes
  • Release management
  • Orchestration
  • Deployment and provisioning
  • Governance, with insights

Source code management tool

Commonly known as a repository, a source code management tool provides version control and keeps track of changes in any set of files. As a distributed revision control system, it is aimed at speed, data integrity, and support for distributed, non-linear workflows.

This is the maturity level checklist; we go from no or low maturity to a high maturity state:

  • No version control
  • Basic version control
  • Source/library dependency management
  • Topic branches flow
  • Sprint/project to branch traceability

Source Code Maturity Levels
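To make the upper levels of this checklist concrete, here is a minimal, hedged example of a topic-branch flow in Git; the branch name and issue ID are illustrative assumptions:

git checkout -b feature/issue-123-login    # topic branch named after the tracked issue
# ...commit work on the topic branch...
git push origin feature/issue-123-login    # sprint/project-to-branch traceability via the issue ID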

Build automation tool

Continuous Integration (CI) is a software development practice that aims for frequent integration of individual pieces of work. Commonly, each person integrates at least once per day, resulting in several integrations throughout the day. Each integration should be verified by an automated Build Verification Test (BVT). These automated tests can detect errors just in time, so they can be fixed before they create more problems later on. This reduces integration issues considerably and allows teams to develop faster and more efficiently.

This is the automation maturity checklist to see how you are doing in your CI (a small sketch follows the checklist):

  • No build automation. Built by hand. Binary check-in.
  • Build automated by central system
  • Reusable build across apps/projects
  • Continuous/nightly builds
  • Feedback loop for builds

Automation Maturity Levels
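As promised, here is a small, hedged rulebook sketch of an automated build with a feedback loop, reusing the image, shell and log steps that appear in the rulebooks later in this blog; the Node.js image and npm commands are illustrative assumptions, not a prescription:

build:
  do:
    - image:
        name: 'node'           # illustrative build image; use whatever your stack needs
        runner: 'bash'
    - shell [Build application]: |
        cd ${project}/${repository}
        npm install             # assumption: a project built with npm
        npm run build
    - log:
        level: info
        msg: Build finished     # a log/notification step closes the feedback loop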

Testing framework

Test automation can apply to code, systems, services, etc. It allows you to test each modification in order to guarantee good QA. Even a daily or weekly code release can produce a report that is sent early every morning. To accomplish this you can install the Selenium app in Clarive.

This checklist will help to determine your testing practices level:

  • No tests
  • Manual tests
  • Automated unit/integration tests
  • Automated interface tests
  • Automated and/or coordinated acceptance tests
  • Test metrics, measurements, and insights
  • Continuous feedback loop and low test failure

Testing Maturity Levels

Database Change Management

It’s important to make sure database changes are taken into consideration when releasing to production. Otherwise, your release team will be working late at night trying to finish up a release with manual steps that are error-prone and nearly impossible to roll back.

Check your team’s current database management state:

  • Manual data/schema migrations
  • Automated un-versioned data/schema migrations
  • Versioned data/schema migrations
  • Rollback-enabled data/schema migrations

Database Maturity Levels

Since database schema changes are sometimes delicate, make sure to include your DBA team in the peer review process, so that changes 1) are code; 2) can be merged and patched; 3) can be code reviewed. A sketch of the “versioned migrations” level follows below.
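Here is that sketch: a minimal, hedged example of a deploy-time step that applies SQL migration files in version order. The folder layout, file naming and mysql client are illustrative assumptions; real setups usually also record applied versions in a schema_version table:

deploy:
  do:
    - shell [Apply versioned migrations]: |
        cd ${project}/${repository}/migrations
        # assumption: files are named 001_create_users.sql, 002_add_index.sql, ...
        for f in $(ls *.sql | sort); do
          echo "Applying $f"
          mysql mydb < "$f"
        done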

Release Management and Orchestration

You can fully orchestrate tools that are involved in the process and manage your release milestones and stakeholders with Clarive.

Imagine that a developer makes a change in the code. After this happens, you need to promote the code to the integration environments, send notifications to your team members, and run the testing plan, as in the sketch below.
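Here is that sketch: a rough, hedged rulebook fragment that chains the test run and the team notification after a change lands, reusing the shell and slack_post steps shown later in this blog; the webhook name and test command are illustrative assumptions:

test:
  do:
    - shell [Run testing plan]: |
        cd ${project}/${repository}
        npm test               # assumption: the testing plan is driven by npm
    - slack_post:
        webhook: SlackIncomingWebhook-1    # illustrative webhook resource name
        text: "Change promoted to integration; tests executed for job {{ ctx.job('name') }}"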

Are you fully orchestrating your tools? Find out with this checklist:

  • Infrequent releases, releases need manual review and coordination
  • Releases are partially automated but require manual intervention
  • Frequent releases, with defined manual and automated orchestration and calendaring
  • Just-in-time or On-demand releases, every change is deployed to production

Orchestration Maturity Levels

Deployment tool

Deploying is the core of how you release your application changes.

How is your team deploying? Find out with this checklist, with a sketch of the more automated levels after it:

  • Manual deployment
  • Deployment with scripts
  • Automated deployment server or tool
  • Automated deployment and rollback
  • Continuous deployment with canary, blue-green and feature-enabling technology

Deployment Maturity Levels
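As promised, here is a hedged sketch of the more automated levels: an automated deployment with a simple rollback path, reusing the ship and shell steps that appear later in this blog. The artifact name, paths and symlink switch are illustrative assumptions, not a prescribed blue-green setup:

deploy:
  do:
    - ship:
        from: '${project}_${job}.tar'      # illustrative artifact name
        to: /tmp/releases/${job}/
        host: ${remote_server}
    - shell [Switch over, keep previous release for rollback]: |
        cd /tmp/releases
        ln -sfn ${job} current    # atomic switch; repoint to the previous job dir to roll back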

Provisioning

As part of deployment, you should also review your provisioning tasks and requirements. Remember that it’s important to provision the application infrastructure for all required environments, keep environment configuration in check and dispose of any intermediate environments in the process.

Yes, provisioning also has several maturity levels:

  • You provision environments by hand
  • Environment configuration with scripts as part of deployment
  • Provisioning of disposable environments with every deployment
  • Full provisioning, disposing and infrastructure configuration as part of deployment
  • Full tracking of environment-application dependencies and cost management

Provisioning Maturity Levels

We have come a long way doing this with IaC (Infrastructure as Code). Nowadays a lot can be accomplished with less pain using technologies such as containers and serverless, but you still need to coordinate all cloud (private and public) and related dependencies, such as container orchestrators.
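For instance, at the “disposable environments” level, each deployment can spin up and tear down its own environment. A minimal, hedged sketch with shell steps and Docker; the image name and port mapping are illustrative assumptions:

deploy:
  do:
    - shell [Provision disposable environment]: |
        docker run -d --name env_${job} -p 8080:80 my-app-image   # illustrative image name
    - shell [Dispose environment]: |
        docker rm -f env_${job}    # tear the environment down once validation is done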

In your path to provisioning automation and hands-free infrastructure, make sure you have a clear (and traceable) path to the Ops part of your DevOps team or organization, so you avoid bottlenecks when infrastructure just needs a magic touch of the hand. One way of accomplishing that is to have a separate stream or category of issues assigned to the DevOps teams in charge of infrastructure provisioning. We’ll cover that in a later blog post.

With the right reports, you’d be amazed by how many times releases get stuck in infrastructure provisioning hell…

Governance

Clarive also includes productivity and management tools, such as Kanban swimlanes, planning, reports and dashboards, that give managers a way to identify problems and give teams a way to quickly check the overall performance of the full end-to-end process.

Here are the key points to make sure you evolve the overall governance of your DevOps process:

  • There is no end-to-end relationship between request (why) and release (when, how, what)
  • Basic Dev-to-Ops traceability, with velocity and release feedback
  • Full traceability from request to deployment
  • Immediate feedback and triggers

Governance Maturity Levels

There you go, let’s DevOps like the grownups do

In this post, we have presented the main Continuous Delivery aspects that every DevOps team should be looking to improve, together with their respective readiness levels. So get together with your team and start planning a good DevOps adoption plan 😉


Schedule a demo with one of our specialists and start improving your DevOps practices.


In this last installment of this series, we will review how Clarive can replace z/OS SCM tools such as CA Endeavor or Serena ChangeMan with a global DevOps pipeline that can drive unified deployments across all platforms.

Source code versioned and deployed by Clarive

Clarive can deploy source code managed outside the mainframe.

Selecting elements to deploy

In this article, z/OS artifacts (programs, copybooks, JCLs, SQLs, etc.) are versioned in Clarive’s Git, but this could be done with any other VCS for that matter. The developer selects the versions of the elements to deploy from the repository view, attaching them to the Clarive changeset.

Versions associated to changesets

Preparing mainframe elements

Clarive will check out the selected version of the source code to deploy in the PRE step of the deployment job and will perform the activities needed to check code quality (i.e. execute static code analysis, check vulnerabilities, etc.) and to identify the type of compilation to be executed (i.e. decide the type of item depending on the naming convention, parse the source code to decide if DB2 precompilation is needed, etc.). Conceptually, the PRE step looks like the sketch below.
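Here is that sketch, in rulebook-style pseudocode rather than Clarive’s actual z/OS rule; the analysis command and the EXEC SQL scan are illustrative assumptions:

pre:
  do:
    - shell [Checkout and analyze]: |
        cd ${project}/${repository}
        run_static_analysis src/           # hypothetical SQA command; any analysis tool fits here
    - shell [Classify elements]: |
        # detect members that embed SQL and therefore need DB2 precompilation
        grep -l 'EXEC SQL' src/*.cbl > needs_db2_precompile.txt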

Depending on the elements to deploy, different actions will be executed:

  • Copybooks, JCLs and all other elements that don’t need compilation will be shipped to the destination PDSs

  • Programs will be precompiled and compiled as needed and the binaries will be kept in temporary load PDSs

A Clarive rule will decide which JCL template will be used to prepare/deploy each type of element and will submit the JCL after replacing the variables with their actual values, depending on the deployment project and environment.

Different z/OS element natures

Deploying elements

Depending on the elements to deploy, different actions will be executed:

  • Programs will be shipped to the destination PDSs and bound as needed.

A Clarive rule will decide what JCL template will be used to deploy each type of element and will submit the JCL after replacing the variables with their actual values depending on the deployment project and environment.

Deploy and bind example

As usual, Clarive will keep track of any nested JCL jobs that may run associated with the parent JCL.

Rollback

Clarive will start a rollback job whenever an error condition occurs in the rule execution. It will automatically check out and deploy the previous version of the elements available in the source repository.

Conclusion and the next steps

In this DevOps for the Mainframe series, we have exposed the key features of Clarive for bringing mainframe technologies into the full, enterprise-wide continuous delivery DevOps pipeline.

Once an organization has decided to modernize mainframe application delivery, there is a set of recommended steps:

Establish Prerequisites

The first step IT leaders need to take before modernizing mainframe application delivery is to evaluate whether the correct prerequisites are in place or in progress. Successfully implementing a mainframe application delivery tool like Clarive requires either an existing process or the will to implement one.

Assess Operational Readiness

Many organizations discover too late that they have underestimated, sometimes dramatically, the investment needed in people, processes, and technology to move from their current environment for modernizing mainframe application delivery. An early readiness assessment is essential to crafting a transition plan that minimizes risk and provides cross-organizational visibility and coordination for the organization’s initiatives. Many organizations already have some sort of mainframe delivery tooling in place.

When key processes have been defined within such a framework, optimizing and transforming them into enterprise-wide delivery is significantly easier, but they still need to be integrated into a single Dev-to-Ops pipeline, as mainframe delivery requests typically tend to run outside the reach of release composition and execution.

Prepare the IT Organization for Change


IT leaders should test the waters to see how ready their own organization is for changing the way mainframe application delivery processes fit into the picture. IT managers must communicate clearly to staff the rationale for the change and provide visibility into the impact on individual job responsibilities. It is particularly important that managers discuss any planned reallocation of staff based on reductions in troubleshooting time, to alleviate fears of staff reductions.

Mainframe aspects

In this series we reviewed many different aspects of fully bringing your mainframe system up to speed with your enterprise DevOps strategy:

  • Define the critical capabilities and tooling requirements to automate your mainframe delivery pipeline.

  • Decide where your code will reside and who (Clarive or a mainframe tool) will drive the pipeline build and deploy steps.

  • Integrate the pipeline with other functional areas, including related services, components and applications, so that releases will be a fully transactional change operation across many systems and platforms.

This concludes our blog series on deploying to the mainframe. We hope you enjoyed it. Let us know if you’d like to schedule a demo or talk to one of our engineers to learn more about how other organizations have integrated the mainframe into the overall delivery pipeline.


Other posts in this series:

Bringing DevOps to the Mainframe pt 1
Bringing DevOps to the Mainframe pt 2: Tooling
Bringing DevOps to the Mainframe pt 3: Source code versioned in z/OS


The DevOps movement, in general, tends to exclude any technologies that are outliers to the do-it-yourself spirit of DevOps. This is due to certain technologies being closed to developer-driven improvements, or roles being irreversibly inaccessible to outsiders.

That’s not the case with the mainframe. The mainframe is armed with countless development tools and programmable resources that rarely fail to enable Dev-to-Ops processes.

Then why have DevOps practices not prospered on the mainframe?

  • Ops are already masters of any production or pre-production environments, so changing the way developer teams interact with those environments requires more politics than technology and is vetted by security practices already in place.
  • New tools don’t target the mainframe: the market and open source communities have focused first on servicing Linux, Windows, mobile and cloud environments.
  • Resistance to change: even if there were new tools and devs could improve processes themselves, management feels that trying out new approaches, especially those that go “outside the box”, could end up putting these environments, and mission-critical releases, at risk.

Organizations want to profit from DevOps initiatives that are improving the speed and quality of application delivery in the enterprise at a vertiginous pace. But how can they leverage processes that are already in place with the faster, combined pipelines set up on the open side of the house?

Enter Clarive for z/OS

Our clients have been introducing DevOps practices to the mainframe for many years now. This has been made possible thanks to the well-known benefits of accepting and promoting the bimodal enterprise.

There are two approaches that can be used simultaneously to accomplish this:

  • Orchestrate mainframe tools and processes already in place – driving and being driven by the organization’s delivery pipeline
  • Launch modernization initiatives that change the way Dev and Ops deliver changes in the mainframe

Business Benefits bringing DevOps to the Mainframe

The benefit is simple. Code that runs on the mainframe is expensive and obscure. By unearthing practices and activities, organizations gain valuable insight that can help transform the z/OS-dependent footprint into a more contained and flexible part of the pipeline, with these key benefits:

Coordinate and Speed-up Application Delivery

Mainframe systems don’t run in isolation. The data they manage and the logic they implement are shared as a single entity throughout the enterprise by applications in the open, cloud and even mobile parts of the organization. Making changes that disrupt different parts of this delicate but business-critical organism needs to be coordinated at many phases, from testing to UATs to production delivery. Delivering change as a single transactional pipeline has to be a coordinated effort both forwards and backwards.

End-to-End Visibility

DevOps practices perceive the mainframe as a closed box that does not play well with activities that target better visibility and end-to-end transparency. Having dashboards and reports that can work as input and output between mainframe release processes and other pipelines will help deliver change.

Run a Leaner Operation and Avoid Waste

Creating mainframe processes that are part of the bigger picture helps determine where constraints may lie and which parts of the pipeline may be deemed obsolete or become bottlenecks.

Lower Release Costs

Mainframe tools are expensive and difficult to manage. MIPS and processing in the mainframe may be capped, and new processes could create unwanted expenses. Relying more on tools that drive the mainframe from Linux may in turn translate into significant per-release cost savings, encouraging a more continuous release process.

Use Cases

The following is a list of the most relevant benefits of Clarive z/OS and popular use cases that our clients have implemented using the Clarive z/OS platform and tools:

  • Compile and link programs using JCL preprocessed templates. Deploy DB2 items directly to the database.
  • Compile related COBOL programs when Copybooks change
  • Control exactly what is deployed to each environment at a given time
  • Schedule jobs according to individualized release and availability calendars and windows
  • Request approval for critical deployment windows or sensitive applications or items
  • Keep the lifecycle in sync with external project and issue management applications
  • Run SQA on the changes promoted. Block deployment if a minimum score has not been reached
  • Reliably rollback changes in Production, replacing previous PDS libraries with the correct ones
  • Provision CICS resources on request by users

Stay tuned for more of this DevOps for the mainframe blog series!


Try Clarive now. Get your custom cloud instance for free.


This release contains a lot of minor fixes and improvements since 7.0.12. It also focuses on interface refactoring, improving the Kanban boards.

Git repositories navigation on a tab

In Clarive 7.0.13 you will find a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more.

To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

Repository Navigation

Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded in Clarive) can include default data as part of it.

The ClariveSE profile now includes a sample-html project and two releases with several changes in them. It also automates the launch of 3 deployment jobs to INTE, TEST, and PROD.

To get the profile and the default sample data installed, execute cla setup <profile> and answer yes to the question Load default data?. Once you start the Clarive server it will automatically load the profile and the default data.


Kanban Board improvements

Custom card layout

You can now configure the layout of the cards of your Kanban Boards to show the information that you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

Cards Layout

Auto refresh

In the Quick View options panel (click on the View button), you’ll now find a switch to toggle Auto Refresh for the board. The board will be updated with changes to the topics shown whenever its tab is activated.

Auto refresh

Save quick view by user

In Clarive 7.0.13, the options selected in the quick view menu are saved locally in your browser storage, so every time you open the board it will use the last configuration you used (swimlanes, auto refresh, cards per list, etc.).

Predefined statuses by list

Whenever you create a new board, it will be created with three default lists, and default statuses will now be assigned to these lists with the following rules:

  • New: Initial statuses
  • In Progress: Normal statuses
  • Done: Final and Cancelled statuses

Killtree when job is cancelled

One of the most important improvements in Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when it is cancelled from the interface.


You can read about this new feature in this blog post

Improvements and issues resolved

  • [ENH] Git repositories navigation on a tab
  • [ENH] Clax libuv adaptation
  • [ENH] NPM registry directory new structure
  • [ENH] Add rulebook documentation to service.artifacts.publish
  • [ENH] Return artifact url on publish
  • [ENH] Invite users to Clarive
  • [ENH] Load default data by profile
  • [ENH] Users can choose shell runner for rulebooks
  • [ENH] Kill job signal configured in yml file
  • [ENH] Add default workers configuration to clarive.yml file
  • [ENH] Boards shared with “ALL” users
  • [ENH] Kanban custom card fields
  • [ENH] Killtree when job is cancelled
  • [ENH] Kanban boards auto refresh
  • [ENH] Make sure to save kanban quick view session
  • [ENH] Filter data according to filter field in Topic Selector fieldlet
  • [ENH] Make sure new created boards have default lists
  • [ENH] Add date fields to card layout configuration
  • [FIX] Check user permissions in service.topic.remove_file
  • [FIX] Make sure users with permissions can access the rule designer
  • [FIX] Make sure CI permissions are working correctly
  • [FIX] Make sure that the ci grid is updated after the ci is modified
  • [FIX] Control exception when running scripts.
  • [FIX] Change project_security structure on user ci
  • [FIX] User without project field permissions can edit the topic
  • [FIX] Make sure React apps work in IE 11
  • [FIX] Show cis in create menu (standard edition)
  • [FIX] Administrator should be able to delete artifacts in ClariveSE
  • [FIX] When publishing NPM packages with scopes tarball is empty
  • [FIX] Make sure default values from variables are used when adding them
  • [FIX] Make sure notifications are sent only to active users
  • [FIX] Make sure to show username in “Blame by time” option for rules versions
  • [FIX] Remove default values when changing type of variable resource
  • [FIX] Allow single mode in variables resources
  • [FIX] Escape “/” in URLs for NPM scoped packages from remote repositories
  • [FIX] Avoid console message when opening a variable resource with cis set as default values
  • [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
  • [FIX] Refresh resources from url
  • [FIX] Create resource from versioned tab
  • [FIX] Make sure remote script element always display a final message
  • [FIX] Save variable when deleted default value field in a variable resource
  • [FIX] Make sure topic’s hidden fields are available as topicfields bounds
  • [FIX] Save resource when it does not have to validate fields
  • [FIX] Make sure projects can be added as kanban swimlanes
  • [FIX] Make sure changeset with artifact revision attached can be opened
  • [FIX] Make sure narrow menu repository navigation show changes related to branch
  • [FIX] Formatting event data if fail service used
  • [FIX] Make sure that the chosen element is always selected in the rule tree.
  • [FIX] Reload data resource when refreshing
  • [FIX] Job distribution and last jobs dashlets should filter projects assigned to the user
  • [FIX] Make sure user combo does not have grid mode available in topic
  • [FIX] Make sure that system users are shown in user combos
  • [FIX] Display column data in edition mode for a Topic Selector fieldlet in a topic
  • [FIX] Filter projects in grids by user security
  • [FIX] Make sure all height sizes are available in topic selector combo
  • [FIX] Ship remote file: show log in several lines
  • [FIX] Skip job dir removal in rollback
  • [FIX] Remove FilesysRepo Resource
  • [FIX] Remove permissions option from user menu
  • [FIX] Make sure screen layout displays correctly when maximizing the description and navigating back in the browser
  • [FIX] Remove session when user get deactivated
  • [FIX] Resources concurrency
  • [FIX] Validate CI Multiple option just with type ci variables
  • [FIX] Resource not saved when validation fails
  • [FIX] Make sure that combo search has optimal performance
  • [FIX] Make sure ldap authentication returned messages are available in stash
  • [FIX] Show date and time in fieldlet datetime
  • [FIX] User session should not be removed on REPL open
  • [FIX] User with action.admin.users should be able to edit users
  • [FIX] Make username available in dashboard rules execution
  • [FIX] Make sure collapsed lists are saved in user session correctly

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgments

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Try Clarive now. Get your custom cloud instance for free.


Elixir logo

As you can see in our previous posts, having a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive rulebooks.

You just have to follow these three simple steps:
1. Get your free Clarive cloud instance
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: we will detail the needed code.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose the Elixir Docker image, using Mix as the build tool.

build:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as we have our own application tests, this step is as simple as running the right command.

test:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now, it’s time to choose where our app will run. For example, send the tar file to another server and run the app.

deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in a PROD environment or another Docker container just for QA.

Happy Ending

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps that follow once the rulebook is finalized.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. When the push is performed, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.


Visit Clarive documentation to learn more about the features of this tool.


Continuous integration (CI) is now becoming a standard in all software projects. With it, a pipeline is executed for each new feature or code change, which allows you to carry out a series of steps to ensure that nothing has been “broken”.

One of the main steps that needs to be undertaken in CI is the implementation of tests, both unit tests and regression tests. The latter can be done in the nightly build that Clarive launches on a daily basis, at the time we pre-define within our Releases.

Unit tests must be run whenever there has been any change in the code. This brings us to our first step: the unit tests will be run with each change in source code via a push (Continuous Integration), and the regression tests will be launched in the nightly builds of each new Release. This will ensure that the new version of our product goes onto the market without any bugs or problems in the features that worked well in the current versions.

At the end of this post the reader should know how to:

  • Integrate tests into a Clarive rulebook
  • Publish files to an artifact repository
  • Send notifications to users

Ready to start

To start this step-by-step guide you need a copy of Clarive installed on your server (you can request a free one here).

Overview

For this post, we will use the following workflow:

1. User commits tests.
2. Clarive runs the rulebook.
3. Mocha runs the tests.
4. Clarive posts a message in Slack with the report.

For the first part of the development, we will assume that the developer has already written their tests. In this example we will use a simple “Hello world” and some example tests that you can find here, which use the expect.js library; it will be installed while the rulebook runs.

Workspace files:

In this example within our git repository we have the following file structure:

├── .clarive.yml
├── package.json
├── src
│   ├── add.js
│   ├── cli.js
│   ├── divide.js
│   ├── helloWorld.js
│   ├── multiply.js
│   ├── sort.js
│   └── subtract.js
└── test
    ├── helloWorld.tests.js
    ├── test-cli.tests.js
    └── test.tests.js

.clarive.yml: The rulebook file that Clarive runs
package.json: NPM configuration file. Within this file we have defined a command to run the tests that are found in the test folder:

"scripts": {
    "test": "mocha --reporter mochawesome --reporter-options reportDir=/clarive/,reportFilename=report,reportPageTitle=Results,inline=true"
  },

src: Folder where we find the source code.
test: Folder where the test files are located.

Writing a rulebook

The rulebook will be responsible for running the tests and generating a report to notify the user of the results of the tests.

First phase: Running the tests

As we can see in the .clarive.yml file skeleton, there is a step named TEST that we will use to run the tests:

test:
  do:
    - log:
        level: info
        msg: The app will be tested here

But first, let’s define some global variables:

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/test-reports

We can see from the code that we are going to use Node.js with Mocha, Chai and expect.js as the libraries to run the tests. And thanks to Docker images, we can use a container with Mocha and Chai already installed, which means that you only need to install expect.js from the rulebook.

We can now specify the Docker image we’re going to use during this TEST phase, then install the expect.js library in our working directory and, finally, the rest of the dependencies from our package.json:

test:
  image: mocha-chai-report
  do:
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install

Next, we just need to run the tests. As we have already defined the test command in our package.json, we only need to run npm test:

test:
  image: mocha-chai-report
  do:
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install
             npm test

Second phase: Publish report

When you finish running the tests, an HTML report will be generated with all the results, thanks to the mochawesome library installed in the Docker image. We will now publish this report in our artifact repository:

    - publish_report = publish:
        repository: Test reports
        from: 'report.html'
        to: "test-reports/{{ ctx.job('name') }}/"
    - log:
        level: info
        msg: "Report generated successfully!"

Now we are going to complete the POST step. During this step we will notify users about the result of the deployment.

Third phase: Notifications

We need to post the message in Slack with a link to the report. To do this, we need to have our WebHook configured in both Slack and Clarive:

All that remains is to add the sending of the message to the POST step. Using the Slack API, any user can configure their own messages. In this case we have configured a message from which the receiving user will be able to access the report directly from the channel:

    - slack_post:
        webhook: SlackIncomingWebhook-1
        text: ":loudspeaker: Job *{{ ctx.job('name') }}* finished."
        payload: {
          "attachments": [
            {
              "text": "Choose an action to do",
              "color": "#3AA3E3",
              "attachment_type": "default",
              "actions": [
                {
                  "name": "report",
                  "text": "View test report :bar_chart:",
                  "type": "button",
                  "style": "primary",
                  "value": "report",
                  "url": "${art_path}/test-reports/{{ ctx.job('name') }}/report.html"
                },
                {
                  "name": "open",
                  "text": "Open Monitor in Clarive",
                  "type": "button",
                  "value": "open",
                  "url": "${server}/r/job/monitor"
                }
              ]
            }
          ]
        }

Let’s do continuous integration!

Using Clarive, we can create a Story where we will attach all the code. If we haven’t already created a Project, we can create one using the Admin panel:

Then we create a repository of artifacts (a public one in this case):

Once we have carried out the previous steps we will create the Story in Clarive:

We’ll now change the state to “In Progress” and Clarive will automatically create a branch in the repository:

From the console we’ll clone the repository and change to the branch we’ve just created:

We’ll now place all our files in this branch and push them to the repository:
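For reference, the console part of this walkthrough looks roughly like this; the repository URL and branch name are illustrative (Clarive created the branch when the Story moved to “In Progress”):

git clone https://<my-clarive>.clarive.io/git/<project>/<repository>
cd <repository>
git checkout <story_branch>
# copy in .clarive.yml, package.json, src/ and test/, then:
git add .
git commit -m "Add tests and rulebook"
git push origin <story_branch>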

Clarive will automatically generate the deployment:

Once finished, we will receive the notification in the Slack channel we’ve selected:

Finally, we can check the test report in the artifact repository.

Next Steps:

In Clarive – After making this first contact with Clarive, our Story can be assigned to a Release (one that was previously created from the release menu) and, after changing the state to “Ready”, the developed feature will be automatically merged into the release branch.

In Rulebook – Well, this is a simple example. The rulebook can be as complex as the user wants it to be, for example by adding if statements or adding steps to the different deployment phases. For example: automatically create a new Issue in Clarive if something has gone wrong, or only run tests if the “test/” folder has been modified in the commit… the possibilities are limitless!


Visit Clarive documentation to learn more about the features of Clarive.



In this video we will create a webhook in our rulebook that provisions a VM in our Azure instance.


Following the instructions given in the previous blog post “DevOps Webservices: a Clarive primer”, we show how easy everything is in Clarive:


Get an early start and try Clarive now. Get your custom cloud instance for free.


We’re pleased to present our new release, Clarive 7.0.12. This release contains a variety of minor fixes and improvements since 7.0.11. It is focused on interface refactoring.

NPM Artifact Repository management

The Clarive team is proud to release this version with artifact repository enhancements. This new functionality enables NPM package management.

  • It is now possible to browse the NPM repository folders through the artifacts interface, visualize their content, and distinguish the new packages that have been included in the repository.

Create artifact tags in order to sort them

  • Use Clarive NPM repositories as a proxy to the global NPM store at npmjs.org, or just use them as local repositories, so you can control which public packages are available to your developers:
npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/
  • Use Clarive groups of repositories to categorize packages and access several local repositories with just one registry:
npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_group>
  • Directly publish to Clarive NPM repositories with the npm publish command:
npm publish ./ --registry http(s)://<clarive_url>/artifacts/repo/
  • You can also publish packages through rulebooks:
do:
  - publish:
      repository: '' # repository name
      from: ''
      to: ''

Take a look at our docs website and learn how to configure your artifact repository in Clarive.

NPM repository events exist in Clarive. So, for example, when the npm publish command is executed against a repository, the artifact is published in Clarive and a notification email is sent to your team. For more information, go to our documentation and learn all you can do with events.

Improvements and issues resolved

  • [ENH] – Project menu revamp
  • [ENH] – Plugins code structure and formatting
  • [ENH] – Owner can cancel and restart jobs
  • [ENH] – Interface plugins standardization
  • [FIX] – Docker images cache management
  • [FIX] – Show subtask editable grid only during edition
  • [FIX] – Differentiate environments and variables in menu

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgements

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Get an early start and try Clarive now. Get your custom cloud instance for free.