Elixir logo

As you have seen in our previous posts, setting up a complete pipeline to bring DevOps into your day-to-day life is easy with Clarive’s Rulebook.

You just have to follow these three simple steps:
1. Get your free Clarive cloud instance.
2. Upload your code to your Clarive project repository.
3. Prepare your rulebook, push your commit and enjoy! (Oops, maybe four steps would have been better. :))
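Pushing the commit is what sets everything in motion: the rulebook’s top-level rules map to the pipeline phases. As a rough sketch (build, test and deploy are rule names rulebooks support; the shell commands here are placeholders for the real steps detailed below):

```yaml
# .clarive.yml -- overall shape of the pipeline (sketch; commands are placeholders)
build:
  - shell: mix compile   # compile the app and package the artifact
test:
  - shell: mix test      # run the application's own test suite
deploy:
  - shell: mix run       # start the app on the target host
```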

So let’s get down to business and walk through the code you need.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

  - workspace: "${project}/${repository}"
  - server: 'https://<my-clarive>.clarive.io'
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we pick an Elixir Docker image and use Mix as the build tool.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

Provided our application ships with its own tests, this step is as simple as running the right command.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can ship the tar file to another server and run the app there.

    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in the PROD environment, or just another Docker container for QA.
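The target host itself can simply be another variable, declared alongside the others at the top of the rulebook (the hostname below is hypothetical):

```yaml
  - remote_server: qa-box.example.com   # hypothetical host; point it at your AWS instance for PROD
```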

Happy Ending

Now that our .yml file is ready, we can use Clarive’s interface to visualize the steps the rulebook will run.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .clarive.yml.

Visit Clarive documentation to learn more about the features of this tool.

Increase the quality and security of node.js applications that rely on NPM packages.

Developers often create small building blocks of code that solve one particular problem and then “package” this code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds such small node.js packages. Development teams often use these packages to compose larger custom solutions.

NPM lets teams exploit the expertise of people who have focused on a particular problem area, inside or outside the organization, and helps teams work together better by sharing talent across projects. Even so, we often see companies struggling with the quality of the packages being used and, as a result, looking for ways to control usage better.

Finding better ways to manage and control which packages are deployed in the cloud and/or the data center is vital.

Organizations want to reduce the risk of failure or instability that comes with downloading the latest, and potentially improperly tested, version of a required NPM package from the internet.

This video shows how Clarive can help you make your node.js applications that use NPM packages more secure and stable.

Get an early start and try Clarive now. Get your custom cloud instance for free.

In this video we can see how Clarive checks the availability of certain versions of applications (BankingWeb 2.1.0 and ComplexApp 1.2.0) needed in an environment before deploying a version of another application (ClientApp 1.1.0).

We could consider these applications (BankingWeb and ComplexApp) as prerequisites for the actual application (ClientApp) whose deployment is requested.

When the required application versions (BankingWeb 2.1.0 and ComplexApp 1.2.0) are not yet in the target environment, Clarive blocks the deployment until the applications are either deployed first (in a separate job) or added to the same deployment job as the application that requires them.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Given that the EU’s GDPR law (replacing the 1995 Data Protection Directive) goes into effect on May 25, almost every software vendor is jumping on the bandwagon to explain how they can help.

Let me be compliant, and follow the stream.

GDPR and Clarive

When reading about the subject online, I noticed that most articles center around the specific business and legal obligations regarding personal data. These articles focus on physical data processing and the data controller obligations to manage processing. Of course!

This is not what I want to write about in this blog however. The GDPR is also expected to impact the software delivery life cycle and its related IT-development processes for organizations that plan to rollout IT projects within the EU.

There are many lifecycle flavors on the market for delivering software today, whether waterfall, iterative, or agile based. All of them define how to manage and control the IT project, from planning to rollout, across the different application layers or modules, and platforms. Common software layers that will be directly impacted by the new GDPR law include of course databases and their related architecture, but also the data transport, data security, presentation, and application layers… basically, potentially every software aspect could be affected!

The impact of GDPR on application delivery

This means if your company intends to continue to roll out systems in the EU, you will have to deal with the new functional and technical requirements introduced by the GDPR like the following (this is not an exhaustive list, only some important ones to make the point):

  • Ensure data protection in the system and the organization, by design and by default (Recital 78 and Article 25)
  • Use data encryption when possible (Recitals 83 and Articles 6-4(e), 32-1(a))
  • Use Data pseudonymization when possible (Recitals 26, 28, 29, 78 and Articles 6-4(e), 25-1, 32-1(a))
  • Anonymize data when possible (Recital 26)
  • Share processing attributes and steps to the data subject in an easy to understand form at the time of data collection, electronically or in writing (Recitals 39, 58 and Articles 12-1, 13-2(a-f))
  • Make data portable to another provider (maybe competitor) (Recital 68 and Articles 13-2(b), 14-2(c), 20)
  • Ensure data is secured, and integrity and confidentiality are maintained, using technical and organizational means under the management of the controller (Recital 49 and Articles 5-1(f), 32-1(b-d))

While a number of these new requirements might be seen as “no-brainers” because they were already part of your software design, others will trigger new requirements that need to be implemented fast and with quality before the law is enforced.

Failing does not really seem to be an option. Not complying with the GDPR requirements could result in very serious penalties! From what I have read, the worst-case scenario could be a fine of €20 million or 4 percent of the company’s previous year’s total global revenue, whichever is greater. Ouch!

The clock is ticking, how do you track progress and ensure compliance?

With only a few months left, how are you progressing with the delivery of these new requirements? Can you truly track requirement progress throughout your software delivery chain? How confident are you that all policies are correctly implemented?

When speaking to bigger clients, this is often their biggest challenge: they have deployed multiple tools to support software delivery. Coding is fragmented, and the delivery toolchain is often poorly integrated, leading to extensive manual activities within the delivery process and a lack of end-to-end visibility and traceability.

At Clarive we believe in SIMPLICITY. Your software delivery toolchain should be as simple as possible, requiring the minimal set of tools to get the work done fast, with quality, and in a transparent way. For smaller organizations and startups, this can be a single tool: Clarive! Bigger organizations often do need multiple tools to support multiple platforms, but they miss overall orchestration and automation. Not those that use Clarive!

As a simple, lean application delivery platform, Clarive will deliver you the traceability you need to track progress on your GDPR (and other) requirements with ease.

Clarive not only helps with end-to-end tracking, its powerful role-based, ruling and workflow automation system also offers capabilities that will help you to ensure everyone on the team remains in compliance with the company’s legal and other requirements, like those for GDPR. For example:

  • Workflow rules: Workflow rules allow you to accept or reject code or actions that do not comply with company policies. For example, our support for code reviews ranges from static code analysis-based decision tree rules to multi-level acceptance approvals within the delivery process.
  • Role-based security: Permissions can be set very granularly according to the role members have with respect to the project.
  • Cross-platform & process automation: The best way to ensure compliance is to AVOID manual interventions. Clarive allows you to automate every delivery execution step (apart from the coding itself, of course) and process workflow. We support this across teams and platforms, making manual activities (other than approvals) redundant.
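As an illustration, a minimal quality gate can be sketched with the same rulebook ops used in the pipelines of this blog series (image, shell with a captured result, if, log); the image name and analysis command are placeholders for whatever tool enforces your policy:

```yaml
    - image:
        name: 'node'          # placeholder: any image that carries your analysis tool
    - scan = shell: |
        cd {{ workspace }}
        npm audit             # placeholder: any command returning non-zero on violations works
    - if "{{ scan.rc != 0 }}":
        - log:
            level: error
            msg: Policy violations found, stopping the delivery
```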

Sounds great? Why don’t you take a look at Clarive now? As our customers can attest, you can get started quickly. Just download Clarive for free here and try it out yourself.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Today we’re going to see how to deploy an application in Google Play Store with Clarive.

In this post we’ll see how to automate the compilation of our applications and make the subsequent upload of a mobile app to the Play Store completely automatic. Clarive will be the only tool we use throughout the whole process.
All of this saves costs, as you avoid manually carrying out the compilation and deployment each time a new version of an application is released.

To develop this, we will use a free Clarive instance. Using Docker containers, we will compile and deploy an Android application to the Google Play Store with the rulebook explained below.
The whole process is driven by what we configure in the .clarive.yml file in the root folder of our repository.


In order to complete this process there are some requirements, which are as follows:
– The Play Store .json key file, so the application can be uploaded automatically.
– The application, ready to be compiled and signed automatically.
– A Clarive instance, which you can request for free here.

Designing our .clarive.yml

The .clarive.yml file will be where we will define the steps that should be followed for the compilation and deployment process.

Defining our variables

First we will declare the variables that will be used throughout our pipeline process.

  artifact_path: http:///artifacts/repo/
  # Root path to our artifact repository.

  artifact_repo: "public"
  # Name of repository

  artifacts_store_path: "android/app/app-release-{{ ctx.job('change_version') }}.apk"
  # Path inside the repository where the generated APK will be stored

  json_file: "clarive-rulebook.json"
  # Name of our JSON file for its uploading to Play Store

  workspace: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # Workspace where our development files will be stored

  package_name: codepath.apps.demointroandroid2clarive
  # Pack name for our app in the Play Store

Building our application

Next is the BUILD phase, and here we are going to compile the application and save the generated file in our artifact repository.

Our build.gradle file must be prepared for the automatic digital signing of the application and its subsequent upload to the Play Store. We must also have previously uploaded a first version of the application to the Play Store manually.
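For reference, automatic release signing in build.gradle typically looks something like this (a sketch: the keystore file name, key alias and environment variable names are hypothetical, so adapt them to your project):

```groovy
android {
    signingConfigs {
        release {
            // hypothetical keystore and env var names -- replace with your own
            storeFile file("release.keystore")
            storePassword System.getenv("KEYSTORE_PASSWORD")
            keyAlias "release"
            keyPassword System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            // sign release builds automatically with the config above
            signingConfig signingConfigs.release
        }
    }
}
```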

An image with Gradle and the Android SDK should be enough to run the compilation.

After specifying the Docker image that we will use, we will execute the gradle command within our working directory, so that the compilation and signature of the application can be carried out.

In our particular case, we should use the root user and the sh shell of the image we are using in order to perform the compilation of the application.

    - image:
        name: 'knsit/gradle-android'
        user: 'root'
        runner: 'sh'

    - shell [Compile application]: |
        cd {{ workspace }}/Application_code/app/
        gradle assembleRelease

To complete the BUILD phase, in which we have compiled the application, we need to save the generated APK file, where the compiled application is located, in our artifact repository:

    - artifact_repo = publish [Store APK in artifacts repository]:
        repository: Public
        to: '${artifacts_store_path}'
        from: '{{ workspace }}/Application_code/app/build/outputs/apk/app-release.apk'
    - log:
        level: info
        msg: Application build finished

Once the file has been saved and stored, the BUILD phase is finished and we can move on to the DEPLOY phase to deploy the application.

Deploying to Play Store

In this phase, we need our APK file with the compiled application, and our .json Play Store authentication file in order to carry out the upload automatically.

In this case, the image we need for the deployment is a Docker image with Fastlane installed; with it we prepare the command that deploys our application to the Play Store.
Here we will be uploading it to the alpha track.

Our .json file is located within our development files where the application is, which means we can place it within our workspace.

    - image:
        name: 'levibostian/fastlane'

    - shell [Upload application with Fastlane]: |
        fastlane supply --apk .artifacts/{{ artifact_repo }}/{{ artifacts_store_path }} -p {{ package_name }} --json_key {{ workspace }}/{{ json_file }} -a alpha

    - log:
        level: info
        msg: The app has been deployed to the Play Store

This way, when we push to our repository, Clarive automatically runs a deployment (CI) and our application is uploaded to the Play Store.


Finally, in the POST step we email the user who launched the deployment to inform them that it has completed and let them check the results.

    - log:
        level: info
        msg: Deploy finished
    - email:
        body: |
          Hello {{ ctx.job("user") }},
          <b>{{ ctx.job('name') }}</b> has finished and your app has been deployed. Check it out in <a href="https://play.google.com/">your Play Console</a>.

          Also your apk file has been stored in your artifacts repository:
              ${artifact_path}${artifact_repo}/${artifacts_store_path}

        subject: Application deployed to Play Store
        to: ${ctx.job("user")}

To conclude

Now that our .yml file is ready, we can use Clarive’s interface to visualize the steps the rulebook will run.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .clarive.yml.

If everything has run correctly, we should be able to see on the monitor how our deployment is being executed.

Job successfully finished

By doing all of this, we have carried out the whole process of compiling and uploading our application to the Google Play Store using Clarive’s rulebooks and different Docker containers.
If we look at our Play Store console, we’ll see a message indicating that we have an application ready and waiting.

APK v2 deployed

APK details

This is a brief example that can serve as a reference. You can configure different environments to deploy to, and change the operations carried out in each phase in a completely customizable way, so that it fits each team’s development and deployment needs.

Get an early start and try Clarive now. Get your custom cloud instance for free.

In previous videos we have deployed a Mainframe Git-managed application, a Java application and a Mainframe Endevor package up to the QA environment.

All of the deployments for each of the technologies and target platforms were done with Jobs created by the same versioned pipeline rule. Now that we have reached the final deployment to the production environment, we don’t want to deploy individual changesets anymore. We want to group them into a release/sprint and deploy them together.

We will do that in a single job created by the same pipeline as the one used for the changesets.
In summary, with Clarive we deploy, in a single job from a single pipeline, multiple technologies to multiple environments on multiple platforms. This ensures a consistent way of deploying.

Get an early start and try Clarive now. Get your custom cloud instance for free.

In this video we will create a webhook in our rulebook that provisions a VM in our Azure instance.

Following the instructions in the previous blog post “DevOps Webservices: a Clarive primer”, we show how easy everything is in Clarive:

Get an early start and try Clarive now. Get your custom cloud instance for free.

So now we can write webservices with Clarive rulebooks. We call them webhooks, but they are actually inbound.

Let me explain. Let’s start with the problem at hand.

The Problem at hand

With Clarive you can write your automation as rulebook files. (You can also write them visually with Clarive EE, but that’s another story.) Rulebooks can do wonderful things, like automating your pipeline: building, testing and deploying apps. They can also be used to provision your infrastructure (think cloud instance or DB) and can be triggered by many different rules, like a topic modification or a branch being pushed. They can also be scheduled to run on a cron.

But what if you want to trigger them from an outside event? Say a user opens an issue on your issue tracker, or someone pushes to a repository somewhere. That’s when we want to trigger a rule in our rulebook. To do that you can call into Clarive. But instead of running a generic Clarive API that calls a rulebook, we wanted to expose meaningful URLs that make more sense to the caller.

What is a Rulebook

A rulebook is a file (or set of files) checked into a Git repository in your Clarive instance. It all starts with the .clarive.yml file.

Your rulebook contains rules. Rules can be build, test, deploy, or events like topic_modify or repository_update. They can also be webhook rules. Webhook rules are events whose names start with a slash (/) character; the event name becomes a URL fragment that can be called.

Webhook rules behave like exposed webservices right into your instance, running within your repository.

Here’s an example:

    /hello_world:
        - echo: running the hello world webhook
          # do some meaningful stuff here

        - web_response:
          # return something to the caller

Anatomy of a web call

When you call into a webhook rule with a URL like this:
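The URL follows the same pattern as the curl example at the end of this post:

```
https://{your_instance}.clarive.io/rule/json/MyProject/MyRepo/hello_world?api_key={your_user_api_key}
```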

It goes through the following steps:

  • Authenticates the user (users must be authenticated via api_key)

    This is your user API key

  • Locates your project MyProject and the associated repository MyRepo
  • Finds the .clarive.yml file
  • Looks for a webservice defined with / and called hello_world
  • Executes the echo code and returns the result in web_response

A hardcore example: provision an Azure VM

To run this example you can create an instance in our free cloud. That will give you the complete infrastructure you need.

  1. Get a free Azure account if you don’t have one already.
  2. Get your free Clarive cloud instance
  3. Once you get an email with the instructions, you can log in to your Clarive instance.
  4. Create a new project and a Git repository associated with it.
  5. Create a story from your project; that will automatically create a new branch in your repo.

    Create a topic in Clarive

  6. Create the Azure variables you will need to log in using a service principal.

    The secret variables that let you log in to your Azure account

  7. Clone this repo to your local workspace, check out the branch and modify your .clarive.yml with the webservice.

        /create_azure_vm:
            - image:
                name: microsoft/azure-cli # docker image to run az commands
            - myoutput = shell: |
                az login --service-principal -u {{ ctx.var('az-service-principal') }} --password {{ ctx.var('az-password') }} --tenant {{ ctx.var('az-tenant') }}
                az vm create --resource-group {{ ctx.var('az-resource-group') }} --name myFirstVM --image centos --admin-username {{ ctx.var('az-vm-username') }} --admin-password {{ ctx.var('az-vm-password') }} --authentication-type password
            - if "{{ myoutput.rc == 0 }}":
                - result =: "VM created successfully!"
            - else:
                - result =: "Oops! Something went wrong, review your Azure instance."
            - web_response:
                body: ${result}
  8. Push the changes (after committing them, of course) to your branch. Then promote your topic to the PROD environment, where your branch will be merged into master, or merge it manually.

  9. Your defined webservice is now available, so call it from your browser or on the command line using curl:

    curl https://{your_instance}.clarive.io/rule/json/{your_project}/{your_repo}/create_azure_vm?api_key={your_user_api_key}
  10. Finally, you can see your VM created in your Azure instance.

    Your VM created in Azure with a Clarive webhook

That’s it. You’ve just created your first DevOps webservice in Clarive 🙂

Get an early start and try Clarive now. Get your custom cloud instance for free.