Source Code Maturity levels

Have you jumped on the DevOps wagon already? You probably have. But perhaps you are still not sure whether you are missing a certain tool in your toolbox for your current DevOps work.

Or maybe your organization or team is starting to plan how to fully embrace DevOps, and your team is researching exactly what needs to be installed in order to have the perfect toolchain. Perhaps you have a gap in some processes that you are not even aware of. Establishing a good, solid DevOps toolchain will help determine ahead of time the degree of success of your DevOps practices.

In this blog post, we will present maturity level checklists for different DevOps areas so you have an idea of where you stand in terms of Continuous Delivery.

We will review the maturity levels from the following DevOps aspects:

  • Source code management
  • Build automation
  • Testing
  • Managing database changes
  • Release management
  • Orchestration
  • Deployment and provisioning
  • Governance, with insights

Source code management tool

Commonly known as the repository, a source code management tool provides version control and can be used to keep track of changes in any set of files. A distributed revision control system is aimed at speed, data integrity, and support for distributed, non-linear workflows.

This is the maturity level checklist. We go from no or low maturity to a high maturity state:

  • No version control
  • Basic version control
  • Source/library dependency management
  • Topic branches flow
  • Sprint/project to branch traceability

Source Code Maturity levels

Build automation tool

Continuous Integration (CI) is a software development practice that aims for frequent integration of individual pieces of work. Commonly each person integrates at least once per day, giving rise to several integrations a day. Each integration should be verified by an automated Build Verification Test (BVT). These automated tests can detect errors just in time, so they can be fixed before they create more problems down the road. This helps reduce integration issues considerably, since the practice lets teams develop faster and more efficiently.
This is the automation maturity checklist to see how you are doing in your CI:

  • No build automation. Built by hand. Binary check-in.
  • Build automated by central system
  • Reusable build across apps/projects
  • Continuous/nightly builds
  • Feedback loop for builds

Automation Maturity Levels

Testing framework

Test automation can cover code, systems, services, etc. It allows every modification to be tested in order to guarantee good QA. Even a daily or weekly release of code can produce a report that is sent out early every morning. To accomplish this you can install the Selenium app in Clarive.

This checklist will help to determine your testing practices level:

  • No tests
  • Manual tests
  • Automated unit/integration tests
  • Automated interface tests
  • Automated and/or coordinated acceptance tests
  • Test metrics, measurements, and insights
  • Continuous feedback loop and low test failure

Testing Maturity Levels

Database Change Management

It’s important to make sure database changes are taken into consideration when releasing to production. Otherwise, your release team will be working late at night trying to finish up a release with manual steps that are error-prone and nearly impossible to roll back.

Check what is your team’s database management current state:

  • Manual data/schema migrations
  • Automated un-versioned data/schema migrations
  • Versioned data/schema migrations
  • Rollback-enabled data/schema migrations

Database maturity levels

Since database schema changes are sometimes delicate, make sure to include your DBA team in the peer review process, so that changes 1) are code; 2) can be merged and patched; 3) can be code reviewed.
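One lightweight way to get schema changes into that reviewable, versioned, rollback-enabled state is to keep each migration as an up/down file pair in the repository. A sketch of the idea (filenames and SQL are illustrative, not a Clarive convention):

```shell
mkdir -p migrations
# Each schema change ships as an up/down pair, so it is code,
# can be merged, patched and code reviewed -- and rolled back
printf 'CREATE TABLE users (id INT PRIMARY KEY);\n' > migrations/001_create_users.up.sql
printf 'DROP TABLE users;\n'                        > migrations/001_create_users.down.sql
ls migrations
```

Rolling back release 001 then simply means applying the matching `.down.sql` file in reverse order.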

Release Management and Orchestration

You can fully orchestrate the tools involved in the process and manage your release milestones and stakeholders with Clarive.

Imagine that a developer makes a change in the code. After this happens, you need to promote the code to the integration environments, send notifications to your team members and run the testing plan.

Are you fully orchestrating your tools? Find out with this checklist:

  • Infrequent releases, releases need manual review and coordination
  • Releases are partially automated but require manual intervention
  • Frequent releases, with defined manual and automated orchestration and calendaring
  • Just-in-time or On-demand releases, every change is deployed to production

Orchestration Maturity Levels

Deployment tool

Deploying is the core of how you release your application changes.

How is your team deploying?

  • Manual deployment
  • Deployment with scripts
  • Automated deployment server or tool
  • Automated deployment and rollback
  • Continuous deployment with canary, blue-green and feature-enabling technology

Deployment Maturity Levels

Provisioning

As part of deployment, you should also review your provisioning tasks and requirements. Remember that it’s important to provision the application infrastructure for all required environments, keep environment configuration in check and dispose of any intermediate environments in the process.

Yes, provisioning also has several maturity levels:

  • You provision environments by hand
  • Environment configuration with scripts as part of deployment
  • Provisioning of disposable environments with every deployment
  • Full provisioning, disposing and infrastructure configuration as part of deployment
  • Full tracking of environment-application dependencies and cost management

Provisioning Maturity Levels

We have come a long way doing this with IaC (Infrastructure as Code). Nowadays a lot can be accomplished with less pain using technologies such as containers and serverless, but you still need to coordinate all cloud (private and public) and related dependencies, such as container orchestrators.

In your path to provisioning automation and hands-free infrastructure, make sure you have a clear (and traceable) path to the Ops part of your DevOps team or organization, making sure to avoid bottlenecks when infrastructure just needs a magic touch of the hand. One way of accomplishing that is to have a separate stream or category of issues assigned to the DevOps teams in charge of infrastructure provisioning. We’ll cover that in a later blog post.

With the right reports, you’d be amazed by how many times releases get stuck in infrastructure provisioning hell…

Governance

Clarive also has productivity and management tools, such as Kanban swimlanes, planning, reports and dashboards, that give managers a way to identify problems and teams a way to quickly check the overall performance of the full end-to-end process.

Here are the key points to make sure you evolve the overall governance of your DevOps process:

  • There is no end-to-end relationship between request (why) and release (when, how, what)
  • Basic Dev-to-Ops traceability, with velocity and release feedback
  • Full traceability from request to deployment
  • Immediate feedback and triggers

Governance Maturity Levels

There you go, let’s DevOps like the grownups do

In this post, we have presented the main Continuous Delivery aspects that every DevOps team should be looking to improve, along with their respective readiness levels. So go to your team and start planning a good DevOps adoption plan 😉


Schedule a demo with one of our specialists and start improving your DevOps practices.


Docker image management

The problem at hand

The situation with the DevOps toolchain is that it just has too many moving parts. And these moving parts have become a cumbersome part of delivering applications.

Have you stopped to think about how many different tools are part of your pipeline? And how this is causing your delivery to slow down?

These might be some of the problems you could be facing when setting up your continuous delivery pipeline:

  • Changes in the application require changes in the way it’s built/deployed

  • New components require new tools

  • Many build, test, and deploy tools have plenty of dependencies

The container bliss

Containers are basically lightweight kits that include pieces of software ready to run the tasks in your pipeline. When containers are used as part of the pipeline, they can include all the dependencies: code, runtime, system tools, system libraries, settings. With containers, your software will run the same pipeline no matter what your environment is. You can run the same container in development and staging environments without opening Pandora’s box.

Containers are the way to be consistent in your CI/CD and releasing/provisioning practices.

Other advantages of containers are:

  • Containers can be versioned

  • Containers can hold the most relevant DevOps infrastructure

  • Containers are cheap and fast to run

  • Ops can let Dev drive CI/CD safely (by giving Devs templatized containers)

Clarive and Docker: what a combo!

Docker is therefore a great companion to your DevOps stack. Docker containers allow your project and repository rulebooks to run pipelines alongside any necessary infrastructure without requiring additional software packages to be installed in the Clarive server. Clarive runs your DevOps pipelines within managed containers.

By using containers in Clarive you can:

  • Isolate your users from the server environment so that they cannot break anything.

  • Version your infrastructure packages, so that different versions of an app can run different versions of an image.

  • Simplify your DevOps stack by having most of your build-test-deploy continuous delivery workflows run on one server (or more, if you have a cluster of Clarive servers), instead of having to install runners for every project everywhere.


Clarive and Docker flowchart


Curating a library of DevOps containers

Using a registry is a good way of keeping a library of containers that target your continuous delivery automation. With Clarive you can maintain a local copy of a registry that is used exclusively for your DevOps automation.

Defining “natures”

Each repository in your project belongs to, or implements, one or more natures. The nature of your code or artifacts defines how they are going to be implemented. A nature is a set of automation and templates. These templates can use different Docker containers to run.

For example, your application may require Node + Python, so two natures. If you keep these natures in templates they will be consistent and will help developers comply with a set of best practices on how to build, validate, lint, test and package new applications as they move to QA and live environments.
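As a sketch of the idea, a rulebook combining both natures might look like this (the step layout and commands below are illustrative assumptions following Clarive's rulebook shell syntax, not a shipped template):

```yaml
do:
   # Node nature: install and test the frontend
   - shell:
       cmd: npm install && npm test
   # Python nature: install dependencies and run the test suite
   - shell:
       cmd: pip install -r requirements.txt && pytest
```

Each nature's steps would live in its template, so every project that declares the nature inherits the same build and test discipline.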

Running commands on other servers

Clarive uses Docker for running shell commands locally. That guarantees that rulebooks (in the project’s .clarive.yml file) will not have access to the server(s) running your pipelines.

But you can still run shell commands on other servers and systems, such as Linux, Windows, various Unix flavors and other legacy systems (including the mainframe!) using the host: option of the shell: command.
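In rulebook syntax, a remote step might look like this (host name and command are illustrative):

```yaml
do:
   shell:
     host: user@remserver
     cmd: ./deploy.sh
```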

How do I use my own containers?

If the container is not available on the Clarive server, the Clarive rulebook downloads the container from Docker Hub.

So, to use your own containers, you have two options:

  • Upload them to Docker Hub, then use them from your rulebook. Clarive will download them on the first run.

  • Install it on your Clarive server. On the first run Clarive will build another version of your container based on Clarive’s default Dockerfile, called clarive/your-container. You don’t need to prefix clarive/ onto the name; that’s done for you automatically.

Docker containers in your pipeline

Manage all active Docker containers in your pipeline from within Clarive


Getting started today

Using containers is an important step in implementing a continuous delivery and continuous deployment process that is streamlined and avoids environment clutter.

Head over to our 30-day trial and let Clarive run your DevOps automation in Docker containers for better consistency and easy setup of your temporary environments.


Learn more about Clarive Docker admin interface with this blog post and learn how to manage containers and docker images.


Running pipeline jobs to critical environments often requires a scheduled execution to take place. Clarive scheduled jobs always run in 3 main steps, called “Pre”, “Run” and “Post”.

Why run a pipeline in phases?

Most of the time, the entire deployment job should not wait for the scheduled time: many of its tasks can run as soon as the job is scheduled.

There are several phases or stages to every scheduled deployment, most of which can run as early as the job is scheduled. Tasks such as building, packaging, testing and even provisioning infrastructure can take place earlier if they do not impact the productive environments.

When defining a pipeline, always think of what can be detected in earlier phases so that the evening deployment run will not fail on something that could have been checked previously.


Separating your pipeline into different stages


Pipeline phases

Here are the different pipeline deployment phases you can run in Clarive.

Deployment preparation (PRE)

The deployment pipeline will take care of:

  • Creating the temporary deployment directory

  • Identifying the files changed to be deployed (BOM)

  • Checking out the files related to the changesets to be deployed from either code repositories (code) or directly attached to the changesets (SQL).

  • Renaming environment-specific files (e.g. web{PROD}.properties will be used just for deployments to the PROD environment)

  • Replacing variables inside the files (e.g. it will replace ${variable} with the real value of the variable configured for the affected project in the target environment)

  • Nature detection: Clarive will identify the technologies to deploy by analyzing the BOM
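The renaming and variable-replacement steps above can be sketched with plain shell commands (a simplified illustration of the idea, not Clarive's actual implementation; the filename and variable are made up):

```shell
ENV=PROD
# A template checked out from the repository, with a ${variable} placeholder inside
printf 'db.host=${db_host}\n' > "web{$ENV}.properties"

# 1) Rename the environment-specific file for the target environment
mv "web{$ENV}.properties" web.properties

# 2) Replace the placeholder with the value configured for the project
db_host="proddb01"
sed -i.bak "s|\${db_host}|${db_host}|g" web.properties

cat web.properties   # → db.host=proddb01
```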


Common deployment operations


Deployment operations (RUN)

Actual deployment operations are executed in this phase of the Clarive job (movement of binaries, restart of servers, checking proper installation, etc.)


Deployment operations


Post-deployment operations (POST)

After the deployment is finished, Clarive will take care of any task needed depending on the final status.

Some of the typical operations performed then are:

  • Notify users

  • Transition changesets states

  • Update repositories (i.e. tag Git repositories with an environment tag)

  • Synchronize external systems

  • Cleanup temporary directories locally and remotely

  • Synchronize related topics statuses (defects, releases, etc.)
     


Post deployment operations


Recap

Regardless of whether you use Clarive or not, when defining your deployment pipelines always think in these three stages or phases.

  • PRE – everything that can run as soon as it has been scheduled
  • RUN – crucial changes that run at the scheduled time
  • POST – post activities or error recovery procedures

Happy DevOps!


Try Clarive now and start with your DevOps journey to continuous delivery.



The DevOps movement, in general, tends to exclude any technologies that are outliers to the do-it-yourself spirit of DevOps. This is due to the way certain technologies are closed to developer-driven improvements, or roles are irreversibly inaccessible to outsiders.

That’s not the case with the mainframe. The mainframe is armed with countless development tools and programmable resources that have rarely failed to enable Dev-to-Ops processes.

Then why have DevOps practices not prospered on the mainframe?

  • Ops are already masters of all productive and pre-productive environments – so changing the way developer teams interact with those environments requires more politics than technology, and is vetted by the security practices already in place.
  • New tools don’t target the mainframe – the market and open source communities have focused first on servicing Linux, Windows, mobile and cloud environments.
  • Resistance to change – even if there were new tools and devs could improve processes themselves, management feels that trying out new approaches, especially those that go “outside the box”, could end up putting these environments, and mission-critical releases at risk.

Organizations want to profit from DevOps initiatives that are improving the speed and quality of application delivery in the enterprise at a vertiginous pace. But how can they leverage processes that are already in place with the faster and combined pipelines setup in the open side of the house?

Enter Clarive for z/OS

Our clients have been introducing DevOps practices to the mainframe for many years now. This has been made possible thanks to the well-known benefits of accepting and promoting the bimodal enterprise.

There are two approaches that can be used simultaneously to accomplish this:

  • Orchestrate mainframe tools and processes already in place – driving and being driven by the organization’s delivery pipeline
  • Launch modernization initiatives that change the way Dev and Ops deliver changes in the mainframe

Business Benefits bringing DevOps to the Mainframe

The benefit is simple. Code that runs on the mainframe is expensive and obscure. By unearthing practices and activities, organizations gain valuable insight that can help transform the z/OS-dependent footprint into a more contained and flexible part of the pipeline, with these key benefits:

Coordinate and Speed-up Application Delivery

Mainframe systems don’t run in isolation. The data they manage and the logic they implement are shared as a single entity throughout the enterprise by applications in the open, cloud and even mobile parts of the organization. Making changes that disrupt different parts of this delicate but business-critical organism needs to be coordinated across many phases, from testing to UATs to production delivery. Delivering change as a single transactional pipeline has to be a coordinated effort, both forwards and backwards.

End-to-End Visibility

DevOps practices perceive the mainframe as a closed box that does not play well with activities that target better visibility and end-to-end transparency. Having dashboards and reports that can work as input and output between the mainframe release processes and other pipelines will help deliver change.

Run a Leaner Operation and Avoid Waste

Creating mainframe processes that are part of the bigger picture helps determine where constraints may lie and which parts of the pipeline may be deemed obsolete or become bottlenecks.

Lower Release Costs

Mainframe tools are expensive and difficult to manage. MIPS and processing on the mainframe may be capped, and new processes could create unwanted expenses. Relying more on tools that drive the mainframe from Linux may in return translate into significant per-release cost savings, encouraging a more continuous release process.

Use Cases

The following is a list of the most relevant benefits of Clarive z/OS and popular use cases that our clients have implemented using the Clarive z/OS platform and tools:

  • Compile and link programs using JCL preprocessed templates. Deploy DB2 items directly to the database.
  • Compile related COBOL programs when Copybooks change
  • Total control over what is deployed to each environment at a given time
  • Schedule jobs according to individualized release and availability calendars and windows
  • Request approval for critical deployment windows or sensitive applications or items
  • Keep the lifecycle in sync with external project and issue management applications
  • Run SQA on the changes promoted. Block deployment if a minimum score has not been reached
  • Reliably rollback changes in Production, replacing previous PDS libraries with the correct ones
  • Provision CICS resources on request by users

Stay tuned for more of this DevOps for mainframe blog series!


Try Clarive now and start bringing DevOps to the mainframe.


This release contains a lot of minor fixes and improvements over 7.0.12. It also focuses on interface refactoring, improving the Kanban boards.

Git repositories navigation on a tab

In Clarive 7.0.13 you will find a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more.

To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

Repository Navigation

Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded in Clarive) can include default data as part of it.

ClariveSE profile now includes a sample-html project and two releases with several changes on them. It also automates the launch of 3 deployment jobs to INTE, TEST, and PROD.

To get the profile and the default sample data installed, execute cla setup <profile> and answer yes to the question Load default data?. Once you start the Clarive server it will automatically load the profile and the default data.


Kanban Board improvements

Custom card layout

You can now configure the layout of the cards of your Kanban Boards to show the information that you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

Cards Layout

Auto refresh

In the Quick View options panel (click on View button), now you’ll find a switch to toggle the Auto Refresh for this board. It will be updated with changes in the topics shown whenever the board tab is activated.

Auto refresh

Save quick view by user

In Clarive 7.0.13 the options selected in the quick view menu will be saved locally in your browser storage so every time you open the board it will use the last swimlanes, autorefresh, cards per list, etc. configuration you used.

Predefined statuses by list

Whenever you create a new board, it will be created with three default lists and now it will assign default statuses to these three lists with the following rules:

  • New: Initial statuses
  • In Progress: Normal statuses
  • Done: Final and Cancelled statuses

Killtree when job is cancelled

One of the most important improvements in Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when it is canceled from the interface.


You can read about this new feature in this blog post

Improvements and issues resolved

  • [ENH] Git repositories navigation on a tab
  • [ENH] Clax libuv adaptation
  • [ENH] NPM registry directory new structure
  • [ENH] Add rulebook documentation to service.artifacts.publish
  • [ENH] Return artifact url on publish
  • [ENH] Invite users to Clarive
  • [ENH] Load default data by profile
  • [ENH] Users can choose shell runner for rulebooks
  • [ENH] Kill job signal configured in yml file
  • [ENH] Add default workers configuration to clarive.yml file
  • [ENH] Boards shared with “ALL” users
  • [ENH] Kanban custom card fields
  • [ENH] Killtree when job is cancelled
  • [ENH] Kanban boards auto refresh
  • [ENH] Make sure to save kanban quick view session
  • [ENH] Filter data according to filter field in Topic Selector fieldlet
  • [ENH] Make sure new created boards have default lists
  • [ENH] Add date fields to card layout configuration
  • [FIX] Check user permissions in service.topic.remove_file
  • [FIX] Make sure user with permissions can access to rule designer
  • [FIX] Make sure CI permissions are working correctly
  • [FIX] Make sure that the ci grid is updated after the ci is modified
  • [FIX] Control exception when running scripts.
  • [FIX] Change project_security structure on user ci
  • [FIX] User without project field permissions can edit the topic
  • [FIX] Make sure React apps work in IE 11
  • [FIX] Show cis in create menu (standard edition)
  • [FIX] Administrator should be able to delete artifacts in ClariveSE
  • [FIX] When publishing NPM packages with scopes tarball is empty
  • [FIX] Make sure default values from variables are used when adding them
  • [FIX] Make sure notifications are sent only to active users
  • [FIX] Make sure to show username in “Blame by time” option for rules versions
  • [FIX] Remove default values when changing type of variable resource
  • [FIX] Allow single mode in variables resources
  • [FIX] Escape “/” in URLs for NPM scoped packages from remote repositories
  • [FIX] Avoid console message when opening a variable resource with cis set as default values
  • [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
  • [FIX] Refresh resources from url
  • [FIX] Create resource from versioned tab
  • [FIX] Make sure remote script element always display a final message
  • [FIX] Save variable when deleted default value field in a variable resource
  • [FIX] Make sure topic’s hidden fields are available as topicfields bounds
  • [FIX] Save resource when it does not have to validate fields
  • [FIX] Make sure projects can be added as kanban swimlanes
  • [FIX] Make sure changeset with artifact revision attached can be opened
  • [FIX] Make sure narrow menu repository navigation show changes related to branch
  • [FIX] Formatting event data when the fail service is used
  • [FIX] Make sure that the chosen element is always selected in the rule tree.
  • [FIX] Reload data resource when refreshing
  • [FIX] Job distribution and last jobs dashlets should filter projects assigned to the user
  • [FIX] Make sure user combo does not have grid mode available in topic
  • [FIX] Make sure that system users are shown in user combos
  • [FIX] Display column data in edition mode for a Topic Selector fieldlet in a topic
  • [FIX] Filter projects in grids by user security
  • [FIX] Make sure all height sizes are available in the topic selector combo
  • [FIX] Ship remote file: show log in several lines
  • [FIX] Skip job dir removal in rollback
  • [FIX] Remove FilesysRepo Resource
  • [FIX] Remove permissions option from user menu
  • [FIX] Make sure screen layout is shown correctly when maximizing the description and choosing back in the browser
  • [FIX] Remove session when user get deactivated
  • [FIX] Resources concurrency
  • [FIX] Validate CI Multiple option just with type ci variables
  • [FIX] Resource not saved when validation fails
  • [FIX] Make sure that the combos search has an optimal performance.
  • [FIX] Make sure ldap authentication returned messages are available in stash
  • [FIX] Show date and time in fieldlet datetime
  • [FIX] User session should not be removed on REPL open
  • [FIX] User with action.admin.users should be able to edit users
  • [FIX] Make username available in dashboard rules execution
  • [FIX] Make sure collapsing lists saved in user session correctly

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgments

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Try Clarive now and start improving your DevOps practices.


Clarive 7.0.13 introduces a new feature that allows remote jobs to be killed when a pipeline job is cancelled.

Normally, pipeline job cancellation will only end processes local to the Clarive server and keep remote processes running. This was working as designed, as we did not intend to nuke remote processes inadvertently.

This is an interesting subject that we think could be of use within or outside the scope of Clarive, and may be useful if you’re wondering how to interrupt job pipelines while they’re running, or killing scripts running remote processes.

killing a remote process tree

Why remote processes

Pipeline job remote execution starts remote processes using one of our 3 communication agents/transports: SSH, ClaX (lightweight push agent) and ClaW (lightweight pull worker). This article is specifically about the SSH transport, as it’s more generic, but it also applies to ClaX and ClaW.

When a pipeline kicks off a remote job, Clarive connects to a remote server and starts the command requested. The connection between the Clarive server and the remote machine blocks (unless in parallel mode) and remains blocked for the duration of the remote command.

Here’s a rulebook pipeline example:

do:
   shell:
     host: user@remserver
     cmd: sleep 30

The above example will block, waiting 30 seconds for the remote sleep command to finish.

During the execution of the command, if we go to the remote machine and do a ps -ef, this is what we’d find:

 user  12042 12012  0 07:47 ?        00:00:00 sshd: user@notty
 user  12043 12042  0 07:47 ?        00:00:00 sleep 30

Most remote execution engines do not track and kill remote processes. The issue of killing remote processes and giving the user feedback (in the process output or UI) is present in DevOps tools from Ansible to GitLab to many others.

 https://gitlab.com/gitlab-org/gitlab-ce/issues/18909

Currently killing a job will not stop remote processes

Killing the remote parent process

Before this release, canceling a job would end the local process but leave the remote command running.

You can reproduce this from the Clarive server with the SSH client command ssh:

 user@claserver $ ssh user@remserver sleep 30

Now suppose we kill the local process – with Clarive’s job cancel command, with a simple Ctrl-C, or even with a kill -9 [pid]:

 user@claserver $ ssh user@remserver sleep 30
 Killed: 9

That typically does not work as intended: the child processes on the remote machine remain alive and become children of the init process (process id 1). This would be the result on the remote server after the local process is killed or the Clarive job canceled:

 user  12043     1  0 07:47 ?        00:00:00 sleep 30

The sshd server process that was overseeing the execution of the remote command terminates. That’s because the socket connection has been interrupted. But the remote command is still running.

Pseudo-TTY

A way to interrupt the remote command could be the use of the ssh -t option. The -t flag tells the SSH client to create a pseudo-TTY, which basically makes the local terminal a mirror of what a remote terminal would be, instead of just running a command.

If you have never used it, give it a try:

$ ssh -t user@remserver vim /tmp/

It will open vim locally as if you had a terminal open on the remote machine.

Now if you try to kill a process started with -t using Ctrl-C, the remote sshd process will terminate its child processes as well, just like when you hit Ctrl-C with a local process.

$ ssh -t user@remserver sleep 30
^C
Connection to remserver closed.

No remote processes remain alive after the kill, and sleep 30 disappears on remserver.

However, this technique does not solve our problem: pipeline jobs are not interactive, so we cannot tell the ssh channel to send a remote kill just by setting up a pseudo-TTY. The kill signal will only have an impact locally and on the remote sshd, and will not be interpreted as a user manually hitting Ctrl-C.

The solution: tracking and pkill

The way to correctly stop remote processes when pipeline jobs are cancelled is to do it in a controlled fashion:

1) Clarive job process starts remote command and keeps the connection open

2) Clarive job is canceled (by the user normally, through the job monitor)

3) Clarive creates a new connection to all servers where commands are being executed

4) A pkill -[signal] -P $PPID command is sent through the same sshd tunnel

5) The pkill will kill the parent remote sshd process and all its children, also called the process tree

That way all the remote processes are stopped with the job cancel.
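Outside Clarive, step 4 can be sketched locally: pkill -P sends a signal to every direct child of a given parent PID. A minimal local simulation (no SSH tunnel involved, and pkill from procps is assumed to be installed):

```shell
# A parent with two children, mimicking a remote sshd and its command tree.
bash -c 'sleep 300 & sleep 300 & wait' &
PARENT=$!
sleep 1
pkill -TERM -P "$PARENT"          # SIGTERM every direct child of PARENT
kill -TERM "$PARENT" 2>/dev/null  # then the parent itself (it may already be gone)
sleep 1
if kill -0 "$PARENT" 2>/dev/null; then
  echo "tree still alive"
else
  echo "process tree gone"
fi
```

Once the children are gone, the parent's wait returns and it exits on its own, which is why the explicit kill of the parent may find nothing left to do.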


Successfully killing remote processes will kill the full remote tree

Picking a signal

Additionally, we’ve introduced control over the local and remote signals used to end the processes. You may be interested in sending a more stern kill -9 or just a nice kill -15 to the remote process.

Clarive will not wait for the remote process to finish since, as we have witnessed many times, certain shutdown procedures may take forever. It does, however, apply a timeout to the local job processes that may be waiting for the remote process to finish.
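The choice of signal matters because a process can trap SIGTERM (15) and shut down gracefully, while SIGKILL (9) can never be trapped. A small local illustration (plain bash, not Clarive code):

```shell
tmp=$(mktemp)
# A process that traps SIGTERM: it kills its own child and exits cleanly.
bash -c "sleep 300 & trap 'kill \$!; echo graceful shutdown >> $tmp; exit 0' TERM; wait" &
PID=$!
sleep 1
kill -15 "$PID"            # polite: the TERM trap runs before exit
wait "$PID" 2>/dev/null
msg=$(cat "$tmp"); rm -f "$tmp"
echo "$msg"                # a kill -9 would have skipped the trap entirely
```

With kill -9 the trap never runs, so any cleanup the process intended to do is lost; that is the trade-off the settings below let you make.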

The following config/[yourconfig].yml file options are available:

# kill signal used to cancel job processes
# - 9 if you want the process to stop immediately
# - 2 or 15 if you want the process to stop normally
kill_signal: 15

# 1|0 - if you want to kill the job children processes as well
kill_job_children: 1
# signal that will be sent to remote children
kill_children_signal: 15

# seconds to wait for killed job child processes to be reaped
kill_job_children_timeout: 30

Why killing remote processes is important

When we get down to business, DevOps is as much about running processes on remote servers, cloud infrastructure and containers as it is about building a culture of do-IT-yourself empowerment.

If you are building DevOps pipelines, or doing remote process execution in general, and want to stop a run midway for whatever reason, it’s important to have a resilient process tree that is tracked and can be killed on request by the master process.

Happy scripting!


Get an early start and try Clarive now. Install your 30-day trial here.



As you have seen in our previous posts, building a complete pipeline to bring DevOps into your day-to-day life is easy with Clarive’s rulebooks.

You just have to follow these three simple steps:
1. Get your 30-day trial
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: here is the code you will need.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we use the Elixir Docker image, with Mix as the build tool.

build:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as we have our own application tests, this step is as simple as running the right command.

test:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now, it’s time to choose where our app will run. For example, send the tar file to another server and run the app.

deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in a PROD environment, or just another Docker container for QA.
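Because the target is just a variable, a QA variant only needs to pin a different host. A hypothetical example (the host name and MIX_ENV value are illustrative, not part of the original pipeline):

```yaml
deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: qa-docker-01          # hypothetical QA container host
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        MIX_ENV=test mix run
```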

Happy Ending

Now that our .yml file is ready, we can use Clarive’s interface to visualize the steps that will run once the rulebook is finalized.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.


Visit Clarive documentation to learn more about the features of this tool.



Increase the quality and security of node.js applications that rely on NPM packages.


Developers often create small building blocks of code that solve one particular problem and then “package” that code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds of such small node.js packages, which development teams use to compose larger custom solutions.
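As a rough illustration, such a building block might ship with nothing more than a minimal package.json; the name, version and test script below are invented for the example:

```json
{
  "name": "@acme/date-utils",
  "version": "1.0.0",
  "description": "Small, focused date helpers",
  "main": "index.js",
  "scripts": {
    "test": "node test.js"
  },
  "license": "MIT"
}
```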

NPM allows teams to exploit the expertise of people who have focused on a particular problem area, inside or outside the local organization, and helps teams work together better by sharing talent across projects. Even so, we often see companies struggling with the quality of the packages being used and, as a result, looking for ways to control usage better.

Finding better ways to manage and control which packages are deployed to their cloud and/or their data centers is vital.

Organizations want to reduce the risk of failure or instability that comes from downloading the latest version of a required NPM package from the internet, potentially improperly tested.

This video shows how Clarive can help you, making your node.js applications that use NPM packages more secure and stable.


Start creating NPM packages with Clarive. Get your 30-day trial now.