
As we have seen in previous posts, having a complete pipeline to introduce DevOps into your day-to-day work is easy with Clarive’s Rulebook.

You just have to follow these three simple steps:
1. Get your free Clarive cloud instance
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: we will detail the needed code.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose the Elixir Docker image and use Mix as the build tool.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/
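
If your application pulls in Hex dependencies, they will normally need to be fetched before compiling. A minimal sketch of the extra commands, assuming the stock elixir image with network access from the container; the same lines can be added to the shell step above:

    mix local.hex --force   # make sure Hex itself is available in the container
    mix deps.get            # fetch the dependencies declared in mix.exs
    mix compile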

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

Provided our application already has its own tests, this step is as simple as running the right command.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can ship the tar file to another server and run the app there.

    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run
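
Note that this assumes Elixir and Mix are already installed on ${remote_server}. A quick sanity check before the first deployment (user and host are placeholders):

    ssh <user>@<remote_server> 'elixir --version && mix --version'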

This remote_server could be an AWS instance in the PROD environment or just another Docker container for QA.

Happy Ending

With our .yml file ready, we can use Clarive’s interface to visualize the steps the rulebook will run once it is finished.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.
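
From the command line, that trigger is nothing more than a regular push; a minimal sketch (the branch name is just an example):

    git add .clarive.yml
    git commit -m "Add Elixir build/test/deploy rulebook"
    git push origin <branch>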

Visit Clarive documentation to learn more about the features of this tool.

Continuous integration (CI) is becoming a standard in software projects: a pipeline is executed for every new feature or code change, running a series of steps to ensure that nothing has been “broken”.

One of the main steps in CI is running tests, both unit tests and regression tests. The latter can run in the nightly build that Clarive launches daily, at the time we pre-define within our Releases.

Unit tests must run whenever the code changes. This brings us to our first step: unit tests run on every push of source code (Continuous Integration), and the test suite is also launched in the nightly builds of each new Release. This ensures that the new version of our product reaches the market without bugs or problems in the features that worked well in the current versions.

At the end of this post the reader should know how to:
– Integrate tests into a Clarive rulebook.
– Publish files in an artifact repository.
– Send notifications to users.

Ready to start

To start this step-by-step guide you need a copy of Clarive installed on your server (you can request a free one here).


For this post, we will use the following workflow:

1. The user commits the tests.
2. Clarive runs the rulebook.
3. Mocha runs the tests.
4. Clarive posts a message in Slack with the report.

For the first part of the development, we will assume that the developer has already written their tests. In this example we will use a simple “Hello world” and some example tests that you can find here, which use the expect.js library; it will be installed while the rulebook runs.

Workspace files:

In this example, our git repository has the following file structure:

├── .clarive.yml
├── package.json
├── src
│   ├── add.js
│   ├── cli.js
│   ├── divide.js
│   ├── helloWorld.js
│   ├── multiply.js
│   ├── sort.js
│   └── subtract.js
└── test
    ├── helloWorld.tests.js
    ├── test-cli.tests.js
    └── test.tests.js

.clarive.yml: The rulebook file that Clarive runs.
package.json: NPM configuration file. Within this file we have defined a command to run the tests found in the test folder (a quick local run is sketched after this file list):

"scripts": {
    "test": "mocha --reporter mochawesome --reporter-options reportDir=/clarive/,reportFilename=report,reportPageTitle=Results,inline=true"

src: Folder containing the source code.
test: Folder where the test files are located.
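
Before wiring anything into Clarive, it can help to verify that the suite passes locally. A quick sketch, assuming Node.js and npm are installed; since the pre-built Docker image is not available locally, the test tooling is installed explicitly and the mochawesome report is skipped:

    cd <repository>
    npm install expect.js mocha   # the Docker image normally provides mocha; install it locally
    npx mocha                     # plain run of the test/ folder, without the mochawesome report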

Writing a rulebook

The rulebook will be responsible for running the tests and generating a report to notify the user of the results of the tests.

First phase: Running the tests

As we can see in the skeleton .clarive.yml file, there is a step named TEST that we will use to run the tests:

    - log:
        level: info
        msg: The app will be tested here

But first, let’s define some global variables:

  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/test-reports

We are going to use Node.js with mocha, chai and expect.js as the libraries to run the tests. Thanks to Docker images, we can use a container with mocha and chai already installed, which means the rulebook only needs to install expect.js.

We can now specify the Docker image we are going to use during this TEST phase, then install the expect.js library in our working directory and, finally, install the dependencies from our package.json:

  image: mocha-chai-report
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install

Next, we just need to run the tests. As we have already defined the test command in our package.json, we only need to run npm test:

  image: mocha-chai-report
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install
             npm test

Second phase: Publish report

When the tests finish running, an HTML report is generated with all the results, thanks to the mochawesome library installed in the Docker image. We now publish this report to our artifact repository:

    - publish_report = publish:
        repository: Test reports
        from: 'report.html'
        to: "test-reports/{{ ctx.job('name') }}/"
    - log:
        level: info
        msg: "Report generated successfully!"

We are now going to complete the POST step. During this step we will notify the users that we want to inform about the deployment.

Third phase: Notifications

We need to post the message in Slack with a link to the report. To do this, we need to have our webhook configured both in Slack and in Clarive:

Now all that remains is to add the sending of the message to the POST step.
With the Slack API, any user can configure their own messages. In this case we have configured a message so that, on receiving it, the user can access the report directly from the same channel:

    - slack_post:
        webhook: SlackIncomingWebhook-1
        text: ":loudspeaker: Job *{{ ctx.job('name') }}* finished."
        payload: {
            "attachments": [
                {
                    "text": "Choose an action to do",
                    "color": "#3AA3E3",
                    "attachment_type": "default",
                    "actions": [
                        {
                            "name": "report",
                            "text": "View test report :bar_chart:",
                            "type": "button",
                            "style": "primary",
                            "value": "report",
                            "url": "${art_path}/test-reports/{{ ctx.job('name') }}/report.html"
                        },
                        {
                            "name": "open",
                            "text": "Open Monitor in Clarive",
                            "type": "button",
                            "value": "open",
                            "url": "${server}/r/job/monitor"
                        }
                    ]
                }
            ]
        }

Let’s do some continuous integration!

Using Clarive, we can create a Story where we will attach all the code. If we haven’t already created a Project, we can create one using the Admin panel:

Then we create a repository of artifacts (a public one in this case):

Once we have carried out the previous steps we will create the Story in Clarive:

We’ll now change the state to “In Progress” and Clarive will automatically create a branch in the repository:

From the console we’ll clone the repository and change to the branch we’ve just created:

We’ll now place all our files in this branch and push them to the repository:
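
For reference, the console side of those two steps looks roughly like this; the repository URL and branch name are placeholders for the ones Clarive shows you:

    git clone <repository-url>
    cd <repository>
    git checkout <story-branch>
    # copy .clarive.yml, package.json, src/ and test/ into the working copy, then:
    git add .
    git commit -m "Add test suite and rulebook"
    git push origin <story-branch>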

Clarive will automatically generate the deployment:

Once it has finished, we will receive the notification on the Slack channel we selected:

Finally, we can check the test report in the artifact repository.

Next Steps:

In Clarive – After making this first contact with Clarive, our Story can be assigned to a Release (one previously created from the release menu) and, after changing its state to “Ready”, the developed feature will be automatically merged into the branch of the release.

In the Rulebook – This is a simple example, but the rulebook can be as complex as the user wants it to be: adding if statements, adding steps to the different deployments, automatically creating a new Issue in Clarive if something has gone wrong, only running the tests if the “test/” folder has been modified in the commit… the possibilities are limitless!

Visit Clarive documentation to learn more about the features of this tool.


In this video we will create a webhook in our rulebook that provisions a VM in our Azure instance.

Following the instructions given in the previous blog “DevOps Webservices: a Clarive primer”, we show how easy everything is in Clarive:

Get an early start and try Clarive now. Get your custom cloud instance for free.

We’re pleased to present our new release, Clarive 7.0.12. This release contains a variety of minor fixes and improvements over 7.0.11 and is focused on interface refactoring.

NPM Artifact Repository management

The Clarive team is proud to release this version with an enhanced artifact repository. The new functionality adds NPM package management.

  • It is now possible to browse the NPM repository folders through the artifacts interface, visualize their content and tell apart the new packages that have been added to the repository.

  • Create artifact tags in order to organize them.

  • Use Clarive NPM repositories that serve as a proxy to the global NPM store at npmjs.org, or just use them as local repositories, so you can control which public packages are available to your developers (a one-time registry setup is sketched after this list):

    npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/

  • Use Clarive Groups of repositories to categorize packages and access several local repositories with just one registry:

    npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_group>

  • Publish directly to Clarive NPM repositories with the npm publish command:

    npm publish ./ --registry http(s)://<clarive_url>/artifacts/repo/

  • You can also publish packages through rulebooks:

    - publish:
        repository: '' # repository name
        from: ''
        to: ''
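
To avoid passing --registry on every command, you can point npm at the Clarive repository once. A minimal sketch, assuming the same URL pattern as above:

    # per project: write the registry into the project's .npmrc
    echo "registry=https://<clarive_url>/artifacts/repo/<npm_repo_group>" >> .npmrc

    # or globally for the current user
    npm config set registry https://<clarive_url>/artifacts/repo/<npm_repo_group>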

Take a look at our docs website to learn how to configure your artifact repository in Clarive.

NPM repository events also exist in Clarive. So, for example, when the *npm publish* command is executed against a repository, the artifact is published in Clarive and a notification email is sent to your team. For more information, go to our documentation and learn everything you can do with events.

Improvements and issues resolved

  • [ENH] – Project menu revamp
  • [ENH] – Plugins code structure and formatting
  • [ENH] – Owner can cancel and restart jobs
  • [ENH] – Interface plugins standardization
  • [FIX] – Docker images cache management
  • [FIX] – Show subtask editable grid only during edition
  • [FIX] – Differentiate environments and variables in menu

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.


Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.

Get an early start and try Clarive now. Get your custom cloud instance for free.

This video shows how .NET applications can be deployed with Clarive.

For this deployment, a SINGLE pipeline is used to deploy to both the DEV and QA environments, assuring a consistent way of deploying.

In an upcoming video, this changeset will be related to a Release and deployed into production together with a mainframe application change and a Java application change, again with the SAME pipeline.

Get an early start and try Clarive now. Get your custom cloud instance for free.

To conclude this blog series, let me share some criteria for evaluating different automation solutions in the application delivery context. These criteria can help you in the selection process for a good delivery automation solution.

Logic Layering
How is the automation logic laid out? Are the flow components tightly or loosely coupled?

Runs Backwards
If rolling back changes is needed, is a reverse flow natural or awkward?

Reusable Components
Can components and parts of the logic be easily reused or plug-and-played from one process to the next?

Entry Barrier
How hard is it to translate the real world into the underlying technology?

Easy to Implement
How hard is it to adapt to new applications and processes? What about maintenance?

Environment and Logic Separation
How independent is the logic from the environment?

Model Transition
Can it handle the evolution from one model to another?

Massive Parallel Execution
Does the paradigm allow splitting the automated execution into correlated parts that can run in parallel, with results joined later?

Generates Model as a Result
Does the automation know what is being changed and store the resulting configuration back into the database?

Handles Model Transitions
Can the system assist in evolving from one environment configuration to another?

Testable and Provable
Can the automation be validated, measured and tested using a dry-run environment and be proven correct?

| Criteria | Process-Driven | Model-Driven | Rule-Driven |
|---|---|---|---|
| Logic Layering | Flowchart | Model, Flowchart | Decision Trees |
| Coupling | Tight | Loose | Decoupled |
| Easy to Debug | | | |
| Runs Backwards (Rollback mode) | | | |
| Understands the underlying environment | | | |
| Understands component dependencies | | | |
| Reusable Components | | | |
| Entry Barrier | Medium | High | Low |
| Easy to Migrate | | ✪✪✪ | ✪✪✪✪ |
| Easy to Maintain | | ✪✪✪ | ✪✪✪✪ |
| Environment and Logic separation | | | |
| Requires Environment Blueprints | | | |
| Handles Model Transitions | | | |
| Massive Parallel Execution | (parallel by branching only) | (limited by model components) | |
| Performance | | ✪✪✪ | ✪✪✪✪✪ |

Final notes

When automating complex application delivery processes, large organizations need to choose a system that is both powerful and maintainable. Once complexity is introduced, ARA systems often become cumbersome to maintain, slow to evolve and practically impossible to migrate out.

Process-driven enterprise systems excel at automating business processes (as in BPM tools), where they do not need to inherently understand the underlying environment. But in application delivery, and release automation in general, understanding the environment is key for component reuse and dependency management. Process flows are difficult to adapt and break frequently.

Model-driven systems have a higher implementation ramp-up time, since they require blueprinting the environment before starting. Blueprinting the environment also means duplicating container metadata and the data held by other configuration management and software-defined infrastructure tools. The actions executed in model-based systems are not transparent, tend to be fragmented and require outside scripting. Finally, many release automation steps simply cannot be modeled that easily.

Rule-driven systems have a low entry barrier and are simple to maintain and extend. Automation steps are decoupled and consistent, testable and reusable. Rules can run massively in parallel, scaling well to demanding delivery pipelines. The rule-action logic is also the basis of machine-learning and many of the AI practices permeating IT nowadays.

In short, here are the key takeaways when deciding what would be the best approach to automating the delivery of application and service changes:


Process-driven:
✓ Easy to introduce
✓ Hard to change
✓ Not environment-aware
✓ Error prone
✓ Complex to navigate and grasp

Model-driven:
✓ Easy to model
✓ Complex to orchestrate
✓ High entry barrier
✓ Duplication of blueprints
✓ Leads to fragmented logic and scripting
✓ Not everything can or needs to be modeled

Rule-driven:
✓ Simple to get started
✓ Highly reusable
✓ Decoupled, easy to change and replace
✓ Massively scalable
✓ Models the environment as a result
✓ Fits many use cases

Rule-driven automation is therefore highly recommended for implementing application and service delivery, environment provisioning and orchestration of tools and processes in continuous delivery pipelines. In fact, a whole new generation of tools in many domains now relies on rule-driven automation, such as:
– Run-book automation
– Auto-remediation
– Incident management
– Data-driven marketing automation
– Cloud orchestration
– Manufacturing automation and IoT orchestration
– And many more…

Release management encompasses a complex set of steps, activities, integrations and conditionals. So which paradigm should drive release management? Processes can become unmanageable and detached from the environment. Models are too tied to the environment and end up requiring scripting to be able to deliver changes in the correct order.

Only rule-driven systems can deliver quick wins that perform to scale and are easy to adapt to fast-changing environments.

Get an early start and try Clarive now. Get your custom cloud instance for free.

It is remarkable how much ITIL bashing I have heard and read about since its 2011 revision was released a few years ago. 

With the transformation into the digital world and practices such as DevOps, Continuous Delivery and Value Stream Mapping, many question whether ITIL is still relevant today.

Of course it is! Let me try to explain this in some detail and share my top 3 reasons why ITIL will remain relevant in 2018 (and likely beyond as well).

Reality in the digital age is the ever-increasing customer expectation that digital and mobile services do what they need, but also that they will always be there, wherever and whenever they are needed. This impacts Dev as well as Ops.

As a result, companies are searching for and creating innovative new services for consumers, industry and government. At the same time, organizations are continuously working on improving the structure and processes for making sure that incidents, problems, service requests and service changes are handled in the most efficient and effective way possible, so that user experience and expectations are met continuously and fast. In the digital world, the expectation is to be up 24/7.

Let’s explore this a step deeper.

IT is required, and desires, to deliver value to its internal or external customers (and wants to do this as fast as is acceptable to them). Since ITIL v3, the value of an IT service has been defined as a combination of Utility and Warranty as the service progresses through its lifecycle.

Utility on the one hand is defined as the functionality offered by a product, application, or service to meet a particular need. Utility is often summarized as “what it does” or “its level of being fit for purpose”.

Warranty on the other hand provides a promise or guarantee that a product, application or service will meet its agreed requirements (“how it is done”, “its level of being fit for Use”). In digital-age wording ensuring digital and mobile services will always be there, wherever and whenever they are needed.

I read another interesting article a while ago stating that Dev only produces 20% of the value that a service creates for its internal or external customers. That 20% is the actual functionality, or what the application does. This is the utility of the service, application or product, as explained above. The other 80% of the value of the service is created by Ops, which ensures the service is usable according to the customer’s needs and will continue to be usable throughout its entire lifecycle. This is what ITIL calls the warranty of the service.

Warranty includes availability, capacity, continuity and security of the service that must be implemented and maintained long after the deployment is finished and Dev moves on to their next project, or sprint.

So in the end, Ops has accountability for close to 80% of the actual value of the service for internal or external customers. That’s a lot!

Looking at DevOps, a cultural and professional movement focused on better communication, collaboration and trust between Dev and Ops to balance responsiveness to dynamic business requirements with stability, it seems only natural that it is Dev that must earn the trust of Ops in this setting. If accountability is split 80%-20%, then it is normal to me that the party taking the highest risk seeks the most trustworthy partner. Ops will seek stability and predictability to deliver the required warranty. To establish trust between Dev and Ops, the handover between the two needs to be “trustworthy”. The way to establish this includes:

  • more transparency and accuracy in release and coding progress
  • more automation within the delivery process (the more manual activities in the delivery process, the lower the level of trust will be)
  • mutual understanding and respect of each other’s needs and expectations to be successful

Therefore, Lean IT and Value Stream Mapping, practices like Continuous Delivery and Continuous Deployment, all become a subset or a building block within a DevOps initiative.  DevOps is often an organic approach toward automating process/workflow and getting products to market more efficiently and with quality.

Often in bigger enterprises, applications or services tend to be highly interconnected. There is a desire to achieve better decoupling and make use of microservices, but for many this will take another decade or even longer to ultimately get there (if at all). Dev teams often work and focus on individual applications or services, but in reality these applications often interact with others within the production environment. Ops has the accountability to ensure services and applications remain available and functional at all times, with quality.

This often means finding a workaround quickly at the front line so customers can continue working, assessing the overall impact of a change in production holistically, identifying the root cause of failures, and so on. This all aligns nicely with what ITIL has been designed for: best practices for managing, supporting and delivering IT services. There is no way the need for such practices will fade or become irrelevant in the near future, especially not in larger enterprises. On the contrary, with the introduction of new platforms (such as public or private cloud, containers, IoT, or virtual machines) we will see an increasing number of silos and teams, because Dev teams often center around specific platforms.

Their deliverables form the microservices and applications of tomorrow, spread over multiple platforms. Ops needs to ensure these services are of quality and deliver value to all customers. This requires discipline, communication, collaboration, tracking, planning and learning across all silos and teams. ITIL still remains the best reference point for establishing such practices.

Big companies, often with a legacy codebase, will only remain successful in the digital age if they find a blend of Agile, DevOps, ITIL and Lean IT that fits them. I mention only these explicitly because they enjoy great momentum at present, but in fact companies should explore the best practices available, find the blend that works effectively and efficiently for them, and ensure buy-in from those affected.

This last aspect is key: teams need to build a common understanding of how DevOps is enabled by Agile, ITIL/ITSM, Lean and maybe other best practices. It is not just about a tool, automation or continuous delivery; how we go about doing this is what matters. You need to promote, inspire and educate teams on how these practices can be used together to set them and the company up for success.

To finish, let me share my 3 reasons why ITIL remains valid in 2018:

1) ITIL keeps providing a stable foundation and reference point in the evolving enterprise

Flexibility, elasticity and scalability remain key attributes of contemporary IT departments. Creating and maintaining this level of agility relies on having clear processes, a clear and accurate understanding of the current IT configuration and of course a good service design. The core principles of ITIL have been refined to help organizations establish these attributes within their technology systems, ensuring that there is a steady foundation for IT operations. Having this stable environment makes it easier to adjust the service management setup without running into any problems.

2) ITIL provides the required stability and value warranty within evolving enterprises

Businesses face more pressure than ever to maintain constant uptime around the clock, and all the innovation in the world is useless if businesses are losing productivity because of system availability issues. ITIL continues to provide the reliability and stability needed to maximize the value of new technology strategies in today’s digital world. While organizations are on their digital transformation journey, they will have to support multi-speed, multi-risk, multi-platform environments and architectures. ITIL, which is itself regularly evolved and updated, continues to provide proven, common-sense best practices to deliver stability in evolving, heterogeneous environments.

3) ITIL remains the de-facto reference set of best practices for IT service management (ITSM) that focuses on aligning IT services with the needs of customers

If you pick and choose, adopt and adapt what you find in ITIL you will learn that a lot of the content is “common sense”. Common sense will never go out of fashion.

Just be aware and accept that what a customer needs and values goes beyond the delivery of (isolated) functionality into a production environment.

Get an early start and try Clarive now. Get your custom cloud instance for free.

A third and final way to automate delivery I will discuss is rule-driven automation.

Rule-driven automation ties together event triggers and actions as the environment evolves from state A to state B when changes are introduced.

Rules understand what changes are being delivered when, where (the environment) and how.

Rules are easy

Rules are driven by events and behavior and are fully transparent. Rules are also behind the simplest and most effective tools employed by users of all levels, from the popular IFTTT to MS Outlook, for automating anything from simple tasks to complex processes. Why? Because rules are both easy to implement and easy to understand.

Let’s use again the analogy of software development to make the rule-driven concept clear. It reminds me of my university time when I was working with rule-based systems. At that time, we made the distinction between procedural and logical knowledge. Let me recap and explain both quickly.

Knowledge is different

Procedural knowledge is knowledge about how to perform some task. Examples are how to provision an environment, how to build an application, how to process an order, how to search the Web, etc. Given their architectural design, computers have always been well-suited to store and execute procedures. As discussed before, most early-day programming languages make it easy to encode and execute procedural knowledge, as they have evolved naturally from their associated computational component (computer). Procedural knowledge appears in a computer as sequences of statements in programming languages.

Logical knowledge on the other hand is the knowledge of “relationships” between entities. It can relate a product and its components, symptoms and a diagnosis, or relationships between various tasks for example. This sounds familiar looking at application delivery and dependencies between components, relationships between applications, release dependencies etc.

Unlike for factual and procedural knowledge, there is no core architectural component within a traditional computer that is well suited to store and use such logical knowledge. Looking in more detail, there are many independent chunks of logical knowledge that are too complex to store easily into a database, and they often lack an implied order of execution. This makes this kind of knowledge ill-suited for straight programming. Logical knowledge seems difficult to encode and maintain using the conventional database and programming tools that have evolved from underlying computer architectures.

Rules as virtual environments

This is why rule-driven development, expert system shells, rule-based systems using rule engines became popular. Such a system was a kind of virtual environment within a computer that would infer new knowledge based on known factual data and IF-THEN rules, decision trees or other forms of logical knowledge that could be defined.

It is clear that building, provisioning and deploying applications, or deploying a release with release dependencies, involves a tremendous amount of logical knowledge. This is exactly the reason why deployment is often seen as complex: we want to define a procedural script for something that has too many logical, non-procedural knowledge elements.

For this reason, I believe that rule-driven automation for release automation and deployment has such great potential.

In a rule-driven automation system, matching rules react to the state of the system. The model is a result of how the system reconfigures itself:

Rule-driven automation is based on decision trees that are very easy to grasp and model, because they:

  • Are simple to understand and interpret. People are able to understand event triggers and rules after a brief explanation. Rule decision trees can also be displayed graphically in a way that is easy for non-experts to interpret.
  • Require little data preparation. A model-based approach requires normalization into a model; behaviors, however, can easily be turned into a rule decision tree without much effort: IF a THEN b.
  • Support full decoupling. With the adoption of service-oriented architectures, automation must be decoupled so that it is easy to replace, adapt and scale.
  • Are auto-scalable, replaceable and reliable. Decoupled logic can scale and is safer to replace, continuously improve and deploy.
  • Are robust. They resist failure even if their assumptions are somewhat violated by variations in the environment.
  • Perform well in large or complex environments. A great number of decisions can be executed using standard computing resources in reasonable time.
  • Mirror human decision making more closely than other approaches. This is useful when modeling human decisions and behavior, and makes them suitable for applying machine learning algorithms.

The main features of rule-driven automation include:

  • Rules model the world using basic control logic: IF this THEN that. For every rule there is an associated action. Actions can be looped and further broken down into conditions.
  • Rules are loosely coupled and can therefore execute in parallel and en masse without the need to create orchestration logic.
  • Rules are templates and can be reused extensively.
  • Rules can be chained, with concurrency controlled.
  • Rules handle complex delivery use cases, including decisions and transformations.
  • The model is a result of how rules interact with the environment. Models and blueprints can also be used as input, but are not a requirement.

Get an early start and try Clarive now. Get your custom cloud instance for free.