
As you have seen in our previous posts, building a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive’s Rulebook.

You just have to follow these three simple steps:
1. Get your free Clarive cloud instance.
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business and walk through the code we need.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

  - workspace: "${project}/${repository}"
  - server: https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose the Elixir Docker image, using Mix as the build tool.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as our application comes with its own tests, this step is as simple as running the right command.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can ship the tar file to another server and start the app there.

    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in the PROD environment, or just another Docker container for QA.
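Putting it all together: the fragments above live in a single .clarive.yml file at the root of the repository. As a minimal sketch (assuming the vars, build, test and deploy section keys used by Clarive rulebooks), the overall layout could look like this:

    vars:
      - workspace: "${project}/${repository}"
      - server: https://<my-clarive>.clarive.io
      - art_path: ${server}/artifacts/repo/${project}

    build:
      # image, compile and publish steps from "Building our application"

    test:
      # image and mix test steps from "Ready to test"

    deploy:
      # ship and remote shell steps from "Deploy wherever we want"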

Happy Ending

With our .yml file prepared, we can use Clarive’s interface to visualize the steps the finished rulebook will run.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.

Visit Clarive documentation to learn more about the features of this tool.

Increase the quality and security of Node.js applications that rely on NPM packages.

Developers often create small building blocks of code that solve one particular problem and then “package” this code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds of such small Node.js packages. Development teams often use these packages to compose larger custom solutions.

NPM allows teams to exploit the expertise of people who have focused on a particular problem area, whether inside or outside the organization, and helps teams work together better by sharing talent across projects. Yet we often see companies struggling with the quality of the packages being used and, as a result, looking for ways to control usage better.

Finding better ways to manage and control which packages are deployed in the cloud and/or the data center is vital.

Organizations want to reduce the risk of failure or instability that comes from downloading the latest, potentially poorly tested version of a required NPM package from the internet.

This video shows how Clarive can help you make your Node.js applications that use NPM packages more secure and stable.
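As a hedged illustration (not the exact pipeline from the video), a rulebook build step for such a Node.js application could look like the sketch below; the internal registry URL is a placeholder, and the workspace and art_path variables are borrowed from the Elixir example above:

    - image:
        name: 'node'
        runner: 'bash'

    - shell [Build and test NPM application]: |
        cd {{ workspace }}
        # npm ci installs exactly what package-lock.json pins,
        # so an untested "latest" version can never slip in
        npm ci --registry https://npm.internal.example.com
        npm test
        tar cvf ${project}_${job}.tar node_modules/ dist/

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'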

Get an early start and try Clarive now. Get your custom cloud instance for free.

In this video we can see how Clarive checks the availability of certain versions of applications (BankingWeb 2.1.0 and ComplexApp 1.2.0) needed in an environment before deploying a version of another application (ClientApp 1.1.0).

We could consider these applications (BankingWeb and ComplexApp) as prerequisites for the actual application (ClientApp) for which the deployment is requested.

When the required application versions (BankingWeb 2.1.0 and ComplexApp 1.2.0) are not yet in the target environment, Clarive will block the deployment until they are either deployed first (in a separate job) or added to the same deployment job as the application that requires them.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Given that the EU’s GDPR (replacing the 1995 Data Protection Directive) goes into effect on May 25, almost every software vendor is jumping on the bandwagon to explain how they can help.

Let me be compliant and follow the stream.

GDPR and Clarive

When reading about the subject online, I noticed that most articles center on the specific business and legal obligations regarding personal data. These articles focus on physical data processing and the data controller’s obligations to manage that processing. Of course!

That is not what I want to write about in this blog, however. The GDPR is also expected to impact the software delivery lifecycle and related IT development processes for organizations that plan to roll out IT projects within the EU.

There are many lifecycle flavors for delivering software on the market today, whether waterfall, iterative, or agile based. All of them define how to manage and control an IT project, from planning to rollout, across the different application layers or modules, and platforms. Common software layers that will be directly impacted by the new GDPR law include, of course, databases and their related architecture, but also the data transport, data security, presentation, and application layers… basically, every software aspect could potentially be affected!

The impact of GDPR on application delivery

This means that if your company intends to continue rolling out systems in the EU, you will have to deal with the new functional and technical requirements introduced by the GDPR, like the following (this is not an exhaustive list, only some important ones to make the point):

  • Ensure data protection in the system and the organization, by design and by default (Recital 78 and Article 25)
  • Use data encryption when possible (Recital 83 and Articles 6-4(e), 32-1(a))
  • Use data pseudonymization when possible (Recitals 26, 28, 29, 78 and Articles 6-4(e), 25-1, 32-1(a))
  • Anonymize data when possible (Recital 26)
  • Share processing attributes and steps with the data subject in an easy-to-understand form at the time of data collection, electronically or in writing (Recitals 39, 58 and Articles 12-1, 13-2(a-f))
  • Make data portable to another provider (maybe a competitor) (Recital 68 and Articles 13-2(b), 14-2(c), 20)
  • Ensure data is secured, and integrity and confidentiality are maintained, using technical and organizational means under the management of the controller (Recital 49 and Articles 5-1(f), 32-1(b-d))

While a number of these new requirements might be seen as no-brainers because they were already part of your software design, others will trigger new requirements that need to be implemented fast and with quality before the law is enforced.

Failing is not really an option. Not complying with the GDPR requirements could result in very serious penalties! From what I have read, the worst-case scenario could be a fine of €20 million or 4 percent of the company’s previous year’s total global revenue, whichever is greater. Ouch!

The clock is ticking: how do you track progress and ensure compliance?

With only a few months left, how are you progressing with the delivery of these new requirements? Can you truly track requirement progress throughout your software delivery chain? How confident are you that all policies are correctly implemented?

When speaking to bigger clients, this is often their biggest challenge: they have deployed multiple tools to support software delivery. Coding is fragmented, and the delivery toolchain is often poorly integrated, leading to extensive manual activities within the delivery process and a lack of end-to-end visibility and traceability.

At Clarive we believe in SIMPLICITY. Your software delivery toolchain should be as simple as possible, requiring the minimal set of tools to get the work done fast, with quality, and in a transparent way. For smaller organizations and startups, this can be a single tool: Clarive! Bigger organizations often do need multiple tools to support multiple platforms, but they miss overall orchestration and automation. Not those that use Clarive!

As a simple, lean application delivery platform, Clarive delivers the traceability you need to track progress on your GDPR (and other) requirements with ease.

Clarive not only helps with end-to-end tracking; its powerful role-based security, rules, and workflow automation also help you ensure everyone on the team remains in compliance with the company’s legal and other requirements, like those of the GDPR. For example:

  • Workflow rules: Workflow rules allow you to accept or reject code or actions that do not comply with company policies (see the sketch after this list). For example, our support for code reviews ranges from static code analysis-based decision tree rules to multi-level acceptance approvals within the delivery process.
  • Role-based security: Permissions can be set very granularly according to the role members have within a project.
  • Cross-platform and process automation: The best way to ensure compliance is to AVOID manual interventions. Clarive allows you to automate every delivery execution step (apart from the coding itself, of course) and every process workflow. We support this across teams and platforms, making manual activities (other than approvals) redundant.
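Workflow rules themselves are configured in Clarive’s rule designer rather than written as code, but even at the rulebook level you can enforce a simple compliance gate. Here is a minimal, hypothetical sketch; run_scan stands in for whatever static analysis tool your team uses:

    - shell [GDPR compliance gate]: |
        cd {{ workspace }}
        # a non-zero exit code fails the job, blocking
        # non-compliant code from moving down the pipeline
        ./run_scan --policy gdpr

    - log:
        level: info
        msg: Compliance gate passed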

Sounds great? Why don’t you take a look at Clarive now? As our customers can attest, you can get started quickly. Just download Clarive for free here and try it out yourself.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Today we’re going to see how to deploy an application to the Google Play Store with Clarive.

In this post we’ll see how to automate the compilation of our application and make the subsequent upload of the mobile app to the Play Store completely automatic. Clarive will be the only tool we use throughout the whole process.
All of this saves costs, since you avoid manually carrying out the compilation and deployment each time a new version of the application is released.

To develop this, we will use a free Clarive instance. Through the use of Docker containers, we will compile and deploy an Android application to the Google Play Store using the rulebook explained below.
The whole process is driven by what we have configured in the .clarive.yml file in the root folder of our repository.


In order to complete this process there are some requirements, which are as follows:
– The Play Store .json key file, in order to be able to upload the application automatically.
– The application ready to be compiled and signed automatically.
– A Clarive instance, which you can request for free here.

Designing our .clarive.yml

The .clarive.yml file will be where we will define the steps that should be followed for the compilation and deployment process.

Defining our variables

First we will declare the variables that will be used throughout our pipeline process.

  artifact_path: http://<my-clarive>.clarive.io/artifacts/repo/
  # Root path to our artifact repository.

  artifact_repo: "public"
  # Name of repository

  artifacts_store_path: "android/app/app-release-{{ ctx.job('change_version') }}.apk"
  # Path inside the repository where the generated APK will be stored

  json_file: "clarive-rulebook.json"
  # Name of our JSON key file for uploading to the Play Store

  workspace: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # Workspace where our development files will be stored

  package_name: codepath.apps.demointroandroid2clarive
  # Package name of our app in the Play Store

Building our application

Next is the BUILD phase, and here we are going to compile the application and save the generated file in our artifact repository.

Our build.gradle file must be prepared for the automatic digital signing of the application and its subsequent upload to the Play Store. Likewise, we must have previously uploaded a first version of the application to the Play Store manually.

An image with Gradle and the Android SDK should be enough to perform the compilation.

After specifying the Docker image that we will use, we execute the gradle command within our working directory so that the compilation and signing of the application can be carried out.

In our particular case, we need to use the root user and the sh shell of the image in order to compile the application.

    - image:
        name: 'knsit/gradle-android'
        user: 'root'
        runner: 'sh'

    - shell [Compile application]: |
        cd {{ workspace }}/Application_code/app/
        gradle assembleRelease

To complete the BUILD phase, in which we have compiled the application, we need to save the generated APK file, where the compiled application is located, in our artifact repository:

    - artifact_repo = publish [Store APK in artifacts repository]:
        repository: Public
        to: '${artifacts_store_path}'
        from: '{{ workspace }}/Application_code/app/build/outputs/apk/app-release.apk'
    - log:
        level: info
        msg: Application build finished

Once the file has been saved and stored, our BUILD phase is complete, and we can move on to the DEPLOY phase to carry out the deployment of the file.

Deploying to Play Store

In this phase, we need our APK file with the compiled application, and our .json Play Store authentication file in order to carry out the upload automatically.

In this case, the image that we need for the deployment is a Docker image with Fastlane installed, and here we prepare the command to execute in order to deploy our application to the Play Store.
We will be uploading it to the alpha track of the application.

Our .json file is located within our development files where the application is, which means we can place it within our workspace.

    - image:
        name: 'levibostian/fastlane'

    - shell [Upload application with Fastlane]: |
        fastlane supply --apk .artifacts/{{ artifact_repo }}/{{ artifacts_store_path }} -p {{ package_name }} --json_key {{ workspace }}/{{ json_file }} -a alpha

    - log:
        level: info
        msg: The app has been deployed to the Play Store

This way, when we push to our repository, Clarive automatically runs a deployment (CI) and our application is uploaded to the Play Store.


Finally, in the POST step we email the user who launched the deployment to inform them that it has completed, so they can check the results.

    - log:
        level: info
        msg: Deploy finished
    - email:
        body: |
          Hello {{ ctx.job('user') }},
          <b>{{ ctx.job('name') }}</b> has finished and your app has been deployed. Check it out in <a href="https://play.google.com/">your Play Console</a>.

          Also your apk file has been stored in your artifacts repository:
              ${artifact_path}${artifact_repo}/${artifacts_store_path}.

        subject: Application deployed to Play Store
        to:
          - ${ctx.job('user')}

To conclude

With our .yml file prepared, we can use Clarive’s interface to visualize the steps the finished rulebook will run.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .yml.

If everything has run correctly, we should be able to see on the monitor how our deployment is being executed.

Job successfully finished

By doing all of this, we have carried out the whole process of compiling and uploading our application to the Google Play Store using Clarive’s rulebooks and different Docker containers.
If we look at our page in the Play Store console, we’ll see a message indicating that we have an application ready and waiting.

APK v2 deployed

APK details

This is a brief example that can serve as a reference. You can configure different environments to deploy to, and you can customize the operations carried out in each phase so that the pipeline adjusts to what each team needs for its development work and deployments.

Get an early start and try Clarive now. Get your custom cloud instance for free.

In previous videos we have deployed a mainframe Git-managed application, a Java application, and a mainframe Endevor package up to the QA environment.

All of the deployments for each of these technologies and target platforms were done with jobs created by the same versioned pipeline rule. Now that we have reached the final deployment to the production environment, we don’t want to deploy individual changesets anymore; we want to group them into a release/sprint and deploy them together.

We will do that in a single job created by the same pipeline as the one used for the changesets.
In summary, with Clarive we deploy, in a single job from a single pipeline, multiple technologies to multiple environments on multiple platforms. This gives us a consistent way of deploying.

Get an early start and try Clarive now. Get your custom cloud instance for free.

In this video we will create a webhook in our rulebook that provisions a VM in our Azure instance.

Following the instructions given in the previous blog “DevOps Webservices: a Clarive primer”, we show how easy everything is in Clarive:
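The full webhook setup follows the primer post; purely as a hedged sketch, the provisioning step at its core could look like the following, assuming the mcr.microsoft.com/azure-cli Docker image, service-principal credentials held in variables, and placeholder resource names:

    - image:
        name: 'mcr.microsoft.com/azure-cli'

    - shell [Provision VM in Azure]: |
        # authenticate with a service principal (credential variables are placeholders)
        az login --service-principal -u ${az_app_id} -p ${az_secret} --tenant ${az_tenant}
        # create the VM; resource group, name and image are examples only
        az vm create --resource-group my-group --name my-vm --image UbuntuLTS --generate-ssh-keys

    - log:
        level: info
        msg: Azure VM provisioned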

Get an early start and try Clarive now. Get your custom cloud instance for free.

Enterprises are in constant search of ways to deliver faster, ideally in a continuous/frequent fashion, with full traceability and control, and of course with excellent business quality.

DevOps as an approach continues to get a lot of attention and support in achieving these goals.

As readers can find in other blogs and articles on DevOps, DevOps aims at bringing Development and Operations closer together, allowing better collaboration between them, and facilitating a smoother handover during the delivery process. Automation remains a critical component in the technical implementation.

What strikes me all the time is that, when I discuss the subject with customers, analysts, and other colleagues in the field, we very quickly seem to end up in a DevOps toolchain discussion cluttered by numerous best-of-breed point products that, one way or another, need to integrate or at least work together to get the delivery job “done”.

Why is that? Why does a majority end up with (too) many tools within the delivery toolchain?

We fail to search for simplicity

If you read what analysts like Gartner or Forrester are writing about implementing DevOps, and if you read more closely about what Lean IT stands for, a common theme that surfaces is SIMPLICITY.

If you want to enhance collaboration between delivery stakeholders, if you want to make the handover of deliverables easier, if you want to automate the end-to-end delivery process, then you should look for ways to make your delivery toolchain simpler, not more complex.

As part of the analysis and continual improvement process of the delivery value stream, we look for better ways to do specific tasks. We should in addition carefully look at alternatives to avoid manual activities in processes when possible. This is just applying common Lean practices in the context of application delivery.

Many (bigger) enterprises remain overly siloed, and this often results in suboptimal improvement cycles. When developers face issues with the build process, they look on the web for better build support for their specific platform, ideally in open source, so they can “tweak” it for their needs if required (it is often a matter of retained “control”). If quality suffers, developers and testers each run their own quest to improve quality from their viewpoint, leading to the selection and usage of specific point products by each team, sometimes without even being aware of each other’s choices.

I could continue with more examples in the same vein, but the pattern is obvious: when teams keep looking for the best solution “within their silo”, organizations will most of the time end up with an overly complex and tool-rich delivery toolchain.

Look at delivery in a holistic way, from a business perspective

The above approach does not respect some important Lean principles, though: look at the value stream from the customer’s perspective, in a holistic way, creating flow while eliminating waste.

These are some of the things you should look at while analysing and improving your delivery toolchain:

  • How does demand/change flow into the process? How is the selection/acceptance process handled? How is delivery progress tracked?

  • How automated are individual delivery steps (like build, provision, test, deploy)? How is the delivery process/chain itself automated? Are any manual activities happening? Why? Does automation cover ALL platforms, or only a subset?

If you would like to learn more about this subject, I can recommend the following ebook on the Clarive website: “Practical Assessment Guide for DevOps readiness within a hybrid enterprise”.

Clarive CLEAN stack

A C.L.E.A.N way to deliver quality

At Clarive we believe simplicity is vital for sustained DevOps and delivery success.
We designed the C.L.E.A.N stack exactly with this in mind:

Clarive Lean & Effective Automation requiring Nothing else for successful delivery.

Indeed, Clarive allows you to:

  • Implement Lean principles and accurate measurement and reporting with real-time and end-to-end insight

  • Implement effective and pragmatic automation of both delivery processes as well as delivery execution steps such as build, provision, test, and deploy.

  • Do all this from within the same product, so there is no need to use anything else to get the job done! No real need to implement artifact repositories, workflow tools, or anything else; Clarive alone will do!

Of course, if you have already invested in tooling, Clarive will collaborate with it bi-directionally to get you started quickly. After all, DMAIC and other improvement cycles are continual, so you can refine further toward simplicity after you get started…

This is an evolution I have seen many of our clients go through: they initially look at and start with Clarive because they have certain automation or orchestration needs. Then they find out they can do with Clarive what they did with Jenkins, and switch to Clarive; then they learn about Clarive’s CI repository and decide to eliminate Nexus. As Clarive has a powerful and integrated workflow automation capability, they realise they could also do without Jira and Bitbucket… and so on. It has saved companies effort and cost.

In case you are interested in Clarive, download it for free here. See also some sample screenshots of the tool below.

Clarive tool

Clarive tool screenshot

Clarive tool screenshot: deploy package

Get an early start and try Clarive now. Get your custom cloud instance for free.