
Environment provisioning is a key part of a continuous delivery process. The idea is simple: we should not only build, test and deploy application code, but also the underlying application environment.

Are your environments being provisioned on-demand as applications deploy? Can Devs request new environments to fit changes in how the application is built? Is environment configuration and modeling built into the application deployment process?

What is the “environment”

The application environment consists of three main areas:

  • Infrastructure

  • Configuration

  • Dependencies

Infrastructure is the most important element of the environment, as it defines where the application will run, the specific configuration needs and how dependencies need to interact with the application.

Configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application.

Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.

What is infrastructure today?

The concept of infrastructure refers to the components, both hardware and software, needed to operate an application, service or system. But as hardware has been abstracted away in favor of scalable, reliable and affordable solutions, the true definition of infrastructure is now different for every application.

We could say infrastructure now advances almost as fast as application technologies and languages themselves. Infrastructure has melded with the application to such a degree that we now basically have to pick the infrastructure as part of our architecture decisions.

Infrastructure has never stopped evolving:

  • virtualization, or how to provision infrastructure in minutes instead of days
  • containerization, or how to provision infrastructure in seconds instead of minutes
  • the cloud, or how to provision infrastructure you do not own
  • serverless, or how not to provision infrastructure as it will provision itself on demand
  • and so on…

And IT infrastructure is not just vertical but also horizontal, as platforms can also connect many service pods and execution silos.

How about configuration and dependencies?

Configuration and dependencies are deep topics that deserve their own articles. But let’s just say that today both tend to be containerized one way or another: both programming languages and infrastructure technologies promote packaging environment configuration and dependencies as part of the deliverable.
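As a minimal sketch of this idea (the image name, environment variable and commands below are illustrative, not taken from any specific project), a pipeline step can pin runtime, configuration and dependencies together by running inside a fixed container image:

```yaml
- image:
    name: 'node:10'        # pinned runtime: the image version travels with the deliverable
    runner: 'bash'
- shell [Install locked dependencies]: |
    cd {{ workspace }}
    export NODE_ENV=qa     # illustrative configuration injected at run time
    npm ci                 # installs exactly the versions recorded in package-lock.json
```

Every environment that runs this step gets the same runtime, the same configuration mechanism and the same dependency tree.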

Business Challenges

For any organization to successfully test and release new applications or application versions, the appropriate environments first need to be in place. Enterprises follow many different procedures for supplying environments to applications; the process may be manual, automated, or a combination of both.

It also may be in the hands of different teams or roles within the enterprise, from developers to operations and sysadmins.

Environment provisioning is how an organization manages infrastructure before, during and after the lifespan of an application or service. It is independent of the execution structure, whether a service-oriented architecture (SOA), microservices, a full MVC application or anything else, as all running services and applications need it:

  • Before: set up development and unit-testing environments and basic SCM controls. Provision environments as the delivery flow advances.

  • During: applications change with new requirements or bug fixes, and they may need to scale their current infrastructure or provision new parts of it as they go.

  • After: when done, decommissioning environments is important to deallocate valuable resources.
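In rulebook terms, the “after” step can be as simple as a teardown command at the end of the pipeline. A minimal sketch (the container name and the use of a POST step here are illustrative):

```yaml
POST:
  - shell [Decommission throw-away environment]: |
      # stop and remove the QA container so its resources are released
      docker stop qa-${project}
      docker rm qa-${project}
  - log:
      level: info
      msg: Environment decommissioned
```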

Business Benefits

Now, why would you want to automate environment provisioning? Or rather, how would you demonstrate that spending time automating and building provisioning into continuous deployment can be beneficial to your organization?

Reduce Average Time to Provision

This KPI offers a direct measurement of the end-to-end process for provisioning new infrastructure, including physical and virtual machines, processing power, storage, databases, middleware, and communication and networking infrastructure, among others. IT managers and business users can employ this metric as an indication of the degree to which IT is supporting the needs of the business.

Reduce Service Complexity

Centralizing all environment automation operations in one place simplifies decisions that affect the business, permitting quick turnaround on scalability investments, impact analysis and change execution when rearranging infrastructure resources to meet new business requirements.

Get Service Allocation Insights

Measure how and for how long infrastructure is being used, how fast it is being delivered and disposed of, and what business requirements are behind each request.

Greater service efficiency and decreased costs

Provisioning and disposing of environments following established patterns and templates helps reduce waste and complexity. Modular environments are also easier to debug and scale, as changes impact only one application instead of a cluster of applications. The bottom line is that modular environments reduce the cost of running tests and are easier to throttle in production, which also translates into applications that only consume what they need at a given time.

Use Cases

Here is a sample of the use cases that trigger an organization to implement a provisioning solution:

  • Onboarding new application teams, who have to deal with organizational complexity and adhere to standards and release policies. Provisioning a baseline environment catalog helps circumvent organizational obstacles and technological challenges.

  • Coordinating code releasing and provisioning to make sure infrastructure changes arrive just-in-time with the code that requires it.

  • Dev and QA environment provisioning, to simplify and control the process of procuring and automating environment generation.

  • Self-service provisioning, offering users a “catalog” with a set of service requests and tasks that can be launched and managed.

  • Infrastructure code being pushed and continuously deployed by developers (i.e. Dockerfiles or Chef recipes), but requiring more control and end-to-end visibility.

Get started today

If you feel you’re not taking full advantage of environment provisioning, here are a few ideas to get you started ASAP:

  • When starting new apps, or designing architecture, try to predict how environments will be provisioned at different stages of the delivery pipeline, especially QA and Production.

  • Build apps whose features can be tested on demand by spinning up throw-away environments.

  • Have your DevOps processes (i.e. scripts, builds) run in containers too.

  • Add environment provisioning to your CI/CD pipeline.

  • Offer users a way to control and define the environment rules and infrastructure needs, as code if possible.
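For instance, a provisioning step can live in the same rulebook as build and deploy. A hedged sketch, assuming a Terraform image and an infra/ folder in the repository (both are illustrative choices, not requirements):

```yaml
PROVISION:
  - image:
      name: 'hashicorp/terraform'   # illustrative provisioning tool
      runner: 'sh'
  - shell [Provision QA environment]: |
      cd {{ workspace }}/infra
      terraform init
      terraform apply -auto-approve
  - log:
      level: info
      msg: QA environment provisioned on demand
```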

In general, containers are a great way to go thanks to their infrastructure-as-code nature and natural DevOps fit. Serverless, on the other hand, can abstract away many environment concerns and make better use of resources as applications grow.

Get an early start and try Clarive now. Get your custom cloud instance for free.


As you can see in our previous posts, having a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive’s Rulebook.

You just have to follow these three simple steps:
1. Get your free Clarive cloud instance
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business and walk through the code we need.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose an Elixir Docker image, using Mix as the build tool.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as our application ships with its own tests, this step is as simple as running the right command.

    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can send the tar file to another server and run the app there.

    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in the PROD environment, or just another Docker container for QA.

Happy Ending

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps the rulebook will follow.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.

Visit Clarive documentation to learn more about the features of this tool.

Increase the quality and security of node.js applications that rely on NPM packages.

Developers often create small building blocks of code that solve one particular problem and then “package” this code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds of such small node.js packages. Development teams often use these packages to compose larger custom solutions.

NPM allows teams to exploit the expertise of people who have focused on a particular problem area, whether inside or outside the organization, and supports teams working better together by sharing talent across projects. Yet we often see companies struggling with the quality of the packages being used, and as a result looking for better ways to control their usage.

Finding better ways to manage and control which packages are deployed in their cloud and/or their data centers is vital.

Organizations want to reduce the risk of failure or instability that results from downloading the latest, potentially improperly tested, version of a required NPM package from the internet.
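One common mitigation, sketched below with purely illustrative package names and versions, is to pin exact versions in package.json instead of floating ranges, so a build never silently pulls the latest release:

```json
{
  "dependencies": {
    "express": "4.16.4",
    "lodash": "4.17.11"
  }
}
```

Combined with a lockfile and `npm ci`, every environment then installs exactly the same, already-tested dependency tree.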

This video shows how Clarive can help you make your node.js applications that use NPM packages more secure and stable.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Continuous integration (CI) is now becoming a standard in all software projects. With it, a pipeline is executed for each new feature or change in code, which allows you to carry out a series of steps to ensure that nothing has been “broken”.

One of the main steps that needs to be undertaken in CI is the implementation of tests, both unit tests and regression tests. The latter can be done in the nightly build that Clarive launches on a daily basis, at the time we pre-define within our Releases.

Unit tests must be run whenever there is any change in the code. This brings us to our first step: the unit tests will be run on every source code push (Continuous Integration), and the regression tests will be launched in the nightly builds of each new Release. This ensures that the new version of our product goes to market without bugs or problems in the features that worked well in the current versions.

At the end of this post the reader should know how to:
– Integrate tests into a Clarive rulebook.
– Publish files to an artifact repository.
– Send notifications to users.

Ready to start

To start this step-by-step guide you need a copy of Clarive installed on your server (you can request a free one here).


For this post, we will use the following workflow:

1. The user commits tests.
2. Clarive runs the rulebook.
3. Mocha runs the tests.
4. Clarive posts a message in Slack with the report.

For the first part of the development, we will assume the developer has already written their tests. In this example we will use a simple “Hello world” and some example tests that you can find here, which use the expect.js library; it will be installed while the rulebook runs.

Workspace files:

In this example within our git repository we have the following file structure:

├── .clarive.yml
├── package.json
├── src
│   ├── add.js
│   ├── cli.js
│   ├── divide.js
│   ├── helloWorld.js
│   ├── multiply.js
│   ├── sort.js
│   └── subtract.js
└── test
    ├── helloWorld.tests.js
    ├── test-cli.tests.js
    └── test.tests.js

.clarive.yml: The rulebook file that Clarive runs
package.json: NPM configuration file. Within this file we have defined a command to run the tests that are found in the test folder:

"scripts": {
    "test": "mocha --reporter mochawesome --reporter-options reportDir=/clarive/,reportFilename=report,reportPageTitle=Results,inline=true"
}
src: Folder where we find the source code.
test: Folder where the test files are located.

Writing a rulebook

The rulebook will be responsible for running the tests and generating a report to notify the user of the results of the tests.

First phase: Running the tests

As we can see in the .clarive.yml skeleton, there is a step named TEST that we will use to run the tests:

    - log:
        level: info
        msg: The app will be tested here

But first, let’s define some global variables:

  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/test-reports

We can see from the code that we are going to use node.js with mocha, chai and expect.js as the libraries to run the tests. And thanks to Docker images, we can use a container with mocha and chai already installed, which means we only need to install expect.js from the rulebook.

We can now specify the Docker image we’re going to use during this TEST phase and then install the expectjs library from our working directory and, finally, update our package.json:

  - image:
      name: mocha-chai-report
  - shell:
      cmd: |
        cd ${workspace}
        npm install expect.js
        npm install

Next, we just need to run the tests. As we have already defined the command to run the tests in our json package, we only need to run npm test:

  - image:
      name: mocha-chai-report
  - shell:
      cmd: |
        cd ${workspace}
        npm install expect.js
        npm install
        npm test

Second phase: Publish report

When the tests finish running, an HTML report with all the results is generated, thanks to the mochawesome library installed in the Docker image. We will now publish this report to our artifact repository:

  - publish_report = publish:
      repository: Test reports
      from: 'report.html'
      to: "test-reports/{{ ctx.job('name') }}/"
  - log:
      level: info
      msg: "Report generated successfully!"

We are now going to complete the POST step. During this step we will inform users that the deployment has completed.

Third phase: Notifications

We need to post a message in Slack with a link to the report. To do this, we must have our WebHook configured both in Slack and in Clarive:

Now all that remains is to add the sending of the message to the POST step.
From the Slack API any user can configure their own messages. In this case we have configured a message from which the user, on receiving it, can access the report directly from the same channel:

  - slack_post:
      webhook: SlackIncomingWebhook-1
      text: ":loudspeaker: Job *{{ ctx.job('name') }}* finished."
      payload: {
          "attachments": [
              {
                  "text": "Choose an action to do",
                  "color": "#3AA3E3",
                  "attachment_type": "default",
                  "actions": [
                      {
                          "name": "report",
                          "text": "View test report :bar_chart:",
                          "type": "button",
                          "style": "primary",
                          "value": "report",
                          "url": "${art_path}/test-reports/{{ ctx.job('name') }}/report.html"
                      },
                      {
                          "name": "open",
                          "text": "Open Monitor in Clarive",
                          "type": "button",
                          "value": "open",
                          "url": "${server}/r/job/monitor"
                      }
                  ]
              }
          ]
      }

Let’s do continuous integration!

Using Clarive, we can create a Story to which we will attach all the code. If we haven’t already created a Project, we can create one using the Admin panel:

Then we create a repository of artifacts (a public one in this case):

Once we have carried out the previous steps we will create the Story in Clarive:

We’ll now change the state to “In Progress” and Clarive will automatically create a branch in the repository:

From the console we’ll clone the repository and change to the branch we’ve just created:

We’ll now place all our files in this branch and push them to the repository:

Clarive will automatically generate the deployment:

Once finished, we will receive the notification on the Slack channel that we’ve selected:

Finally, we can check the test report in the artifact repository.

Next Steps:

In Clarive – After making this first contact with Clarive, our Story can be assigned to a Release (one previously created from the release menu) and, after changing its state to “Ready”, the developed feature will automatically be merged into the branch of the release.

In Rulebook – Well, this is a simple example, but the rulebook can be as complex as the user wants it to be: adding if statements, adding elements to the different deployments (for example, automatically creating a new Issue in Clarive if something has gone wrong), only running tests if the “test/” folder was modified in the commit… the possibilities are limitless!

Visit Clarive documentation to learn more about the features of this tool.


Spoiler alert: if you like short product cycles to deliver features that make your users very happy, you probably already know the answer.

We’ve just released a great guide to help startups get started with delivering lean software. Get your copy here if you haven’t already.

Lean application delivery is the part of lean startup methodology that deals with how software products are built, how to prioritize, how to track and measure and how to automate every aspect of the pipeline.

It’s basically the marriage of DevOps and your workflow: be it agile, kanban, scrumban or, yes, waterfall.

Lean Delivery:
Lean DevOps?
Lean Agile?

As we say in the guide, this is about your journey to lean nirvana: a just-in-time flow to deliver value to your users.

Tools like Trello

Post-its on a wall or simple tools like Trello are a great way to start lean. Trello is no-frills and light on features, and it sucks when you need to do anything that transcends organizing simple tasks. But it does get a team collaborating on goals fast.

Github, Gitlab or Bitbucket can get you coding and building stuff… but what stuff? For whom? Why? How do you deliver it? How do you align with the business plan?

While marketing and sales are getting their job done with HubSpot, engineers and product people are fiddling with the gruesome toolchain.

Are the tools running your team, or the other way around? Aha!, Jira and Pivotal Tracker can get you very far. Heck, so far that you could probably spend your whole life just perfecting your use of these tools. But that’s not how a startup works. There’s no time. You need to define products and get people building, and you would probably need at least 4 or 5 tools just to:

  • Define product, align with goals and user value;

  • Code, track, deploy;

  • Automate, measure, iterate!

You need your product people to collaborate. And you need to automate everything. From goals, to ideas, to the DevOps pipeline. Just get your team to deliver software the way it’s meant to be: lovable products that iterate fast.

Batteries Inside

Our guide covers a few of our favorite topics in lean delivery, enough for a quick 10-minute read-through. Here’s a hint of some of the topics covered in the guide:

Define your flow before picking your tools

Or how to avoid bloating your startup with out-of-the-box, out-of-place processes by picking tools before you pick the process.

Delivering software is not just an engineering thing

Or how to build traction and make products your users love and your business can sell.

Emotion is a gauge

Be sensitive and measure emotion correctly.

Have a place for ideas

Or how to nurture and follow up on things that will bring value to your users.

MVP all the time: break down work

Or how to deliver value while keeping your team and users motivated.

Isolate changes

Or how to be able to put a release together just-in-time instead of building really huge “develop” branches.

Measure your process, improve fast

Or how to keep your team delivering frequent releases.

Eat your own dog food

Or how to avoid delivering software that will bog your users down.

Avoid release anxiety

Or how to fine-tune your iteration so that users, engineers and management are continuously happy.

It’s only done when it’s in production

Do I need to say more?

I hope this is good enough to get you started. There’s a lot of literature out there on how to build your startup to be lean and mean. Learn, measure, iterate!

Go get your Guide to Lean Delivery.

Today we’re going to see how to deploy an application in Google Play Store with Clarive.

In the following post we’ll see how to automate the compilation of our applications, as well as how to make the subsequent upload of a mobile app to the Play Store completely automatic. Clarive will be the only tool we’ll use throughout the whole process.
All of this will save you money, as you avoid the cost of manually carrying out the compilation and deployment each time a new version of an application is launched.

To develop this, we will use a free Clarive instance. Through the use of Docker containers we will be able to compile and deploy an Android application to the Google Play Store using the rulebook explained below.
The whole process is managed through the configuration in the .clarive.yml file in the root folder of our repository.


In order to complete this process there are some requirements, which are as follows:
– The Play Store .json file in order to be able to upload the application automatically.
– The application ready to be compiled and signed automatically.
– A Clarive instance, which you can request for free here

Designing our .clarive.yml

The .clarive.yml file will be where we will define the steps that should be followed for the compilation and deployment process.

Defining our variables

First we will declare the variables that will be used throughout our pipeline process.

  artifact_path: http:///artifacts/repo/
  # Root path to our artifact repository.

  artifact_repo: "public"
  # Name of repository

  artifacts_store_path: "android/app/app-release-{{ ctx.job('change_version') }}.apk"
  # Path inside the repository where the generated APK will be stored

  json_file: "clarive-rulebook.json"
  # Name of our JSON file for its uploading to Play Store

  workspace: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # Workspace where our development files will be stored

  package_name: codepath.apps.demointroandroid2clarive
  # Package name for our app in the Play Store

Building our application

Next is the BUILD phase, and here we are going to compile the application and save the generated file in our artifact repository.

Our build.gradle file must be prepared for the automatic digital signing of the application and its subsequent uploading to the Play Store. Likewise, we must also have previously uploaded a first version of the application to the Play Store manually.
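As a sketch of what that preparation usually involves (the keystore file name, alias and environment variable names below are assumptions, not taken from this project), build.gradle can declare a release signing configuration:

```groovy
android {
    signingConfigs {
        release {
            storeFile file("release.keystore")                // illustrative keystore path
            storePassword System.getenv("KEYSTORE_PASSWORD")  // secrets come from the environment
            keyAlias "release"
            keyPassword System.getenv("KEY_PASSWORD")
        }
    }
    buildTypes {
        release {
            signingConfig signingConfigs.release  // sign release builds automatically
        }
    }
}
```

With this in place, the release build produced by Gradle comes out already signed, ready for upload.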

An image with Gradle and the Android SDK should be enough to perform the compilation.

After specifying the Docker image that we will use, we execute the gradle command within our working directory to carry out the compilation and signing of the application.

In our particular case, we need to use the root user and the sh shell of the image we are using in order to compile the application.

    - image:
        name: 'knsit/gradle-android'
        user: 'root'
        runner: 'sh'

    - shell [Compile application]: |
        cd {{ workspace }}/Application_code/app/
        gradle assembleRelease

To complete the BUILD phase, in which we have compiled the application, we need to save the generated APK file, which contains the compiled application, in our artifact repository:

    - artifact_repo = publish [Store APK in artifacts repository]:
        repository: Public
        to: '${artifacts_store_path}'
        from: '{{ workspace }}/Application_code/app/build/outputs/apk/app-release.apk'
    - log:
        level: info
        msg: Application build finished

Once the file has been saved and stored, our BUILD phase is complete, and we move on to the DEPLOY phase to carry out the deployment of the file.

Deploying to Play Store

In this phase, we need our APK file with the compiled application, and our .json Play Store authentication file in order to carry out the upload automatically.

In this case, the image we need for the deployment will be a Docker image with Fastlane installed, and here we will prepare the command we need to execute to deploy our application to the Play Store.
In this case we will upload it to the Alpha track of the application.

Our .json file is located within our development files where the application is, which means we can place it within our workspace.

    - image:
        name: 'levibostian/fastlane'

    - shell [Upload application with Fastlane]: |
        fastlane supply --apk .artifacts/{{ artifact_repo }}/{{ artifacts_store_path }} -p {{ package_name }} --json_key {{ workspace }}/{{ json_file }} -a alpha

    - log:
        level: info
        msg: The app has been deployed to the Play Store

In this way, when we push to our repository, Clarive will automatically run a deployment (CI) and our application will be uploaded to the Play Store.


Finally, in the POST step we will email the user that launched the deployment to inform them that it has been completed, so that they can check the results.

    - log:
        level: info
        msg: Deploy finished
    - email:
        body: |
          Hello {{ ctx.job("user") }},
          <b>{{ ctx.job('name') }}</b> has finished and your app has been deployed. Check it out in <a href="https://play.google.com/">your Play Console</a>.

          Also your apk file has been stored in your artifacts repository:
              ${artifact_path}${artifact_repo}/${artifacts_store_path}.

        subject: Application deployed to Play Store
        to:
          - ${ctx.job("user")}

To conclude

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps the rulebook will follow.
To start the deployment, we only need to push to the repository, just as we have seen in other posts. On push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .yml file.

If everything has run correctly, we should be able to see on the monitor how our deployment is being executed.

Job successfully finished

By doing all of this, we have carried out the whole process of compiling our application and uploading it to the Google Play Store using Clarive’s rulebooks and different Docker containers.
If we look at our Play Store console page, we’ll see a message indicating that we have an application ready and waiting.

APK v2 deployed

APK details

This is a brief example that can serve as a reference. You can configure different environments to deploy to, and you can change the type of operation carried out in each phase in a completely customizable way, so that it adjusts to what each person needs for their development work and deployments.

Get an early start and try Clarive now. Get your custom cloud instance for free.

Enterprises are in constant search of ways to deliver faster, ideally in a continuous/frequent fashion, with full traceability and control, and of course with excellent business quality.

DevOps as an approach continues to get a lot of attention and support in achieving these goals.

As readers can find in other blogs and articles on DevOps, DevOps aims at bringing Development and Operations closer together, allowing better collaboration between them, and facilitating a smoother handover during the delivery process. Automation remains a critical component in the technical implementation.

What strikes me all the time is that, when I discuss the subject with customers, analysts and other colleagues in the field, we very quickly seem to end up in a DevOps toolchain discussion cluttered by numerous best-of-breed point products that one way or another need to integrate, or at least work together, to get the delivery job “done”.

Why is that? Why does a majority end up with (too) many tools within the delivery toolchain?

We fail to search for simplicity

If you read what analysts like Gartner or Forrester are writing about implementing DevOps, and if you read more closely about what Lean IT stands for, then a common theme that will surface is SIMPLICITY.

If you want to enhance collaboration between delivery stakeholders, if you want to make the handover of deliverables easier, if you want to automate the end-to-end delivery process, then you should look for ways to make your delivery toolchain simpler, not more complex.

As part of the analysis and continual improvement process of the delivery value stream, we look for better ways to do specific tasks. We should in addition carefully look at alternatives to avoid manual activities in processes when possible. This is just applying common Lean practices in the context of application delivery.

Many (bigger) enterprises remain overly siloed, and this often results in suboptimal improvement cycles. When developers face issues with the build process, they look on the web for better build support for their specific platform, ideally in open source, so they can “tweak” it for their needs if required (it is often a matter of retained “control”). If quality suffers, developers and testers can do their own quest to improve quality from their viewpoint, leading to the selection and usage of specific point products by each team, sometimes not even aware of their respective choice.

I can continue with more examples in the same trend, but the pattern is obvious: When teams continue to look for the best solution “within their silo”, then most of the time organizations will end up in an overly complex and tool rich delivery toolchain.

Look at delivery in a holistic way, from a business perspective

The above approach is not respecting some important Lean principles though: Look at the value stream from a customer’s perspective, in a holistic way, creating flow while eliminating waste.

These are some of the things you should look at while analysing and improving your delivery toolchain:

  • How does demand/change flow into the process? How is the selection/acceptance process handled? How is delivery progress tracked?

  • How automated are individual delivery steps (like build, provision, test, deploy)? How is the delivery process/chain automated itself? Any manual activities happening? Why? Does automation cover across ALL platforms, or only a subset?

In case you would like to learn more about this subject, I can recommend the following ebook on the Clarive website: “Practical Assessment Guide for DevOps readiness within a hybrid enterprise”.

Clarive CLEAN stack

A C.L.E.A.N way to deliver quality

At Clarive we believe simplicity is vital for sustained DevOps and delivery success.
We designed the C.L.E.A.N stack exactly with this in mind:

Clarive Lean & Effective Automation requiring Nothing else for successful delivery.

Indeed, Clarive allows you to:

  • Implement Lean principles and accurate measurement and reporting with real-time and end-to-end insight

  • Implement effective and pragmatic automation of both delivery processes as well as delivery execution steps such as build, provision, test, and deploy.

  • Do all this from within the same product, so there is no need to use anything else to get the job done! There is no real need to implement artifact repositories, workflow tools or anything else: Clarive alone will do.

Of course, in case you have already made investments in tooling, Clarive will collaborate in a bi-directional way to get you started quickly. After all, DMAIC and other improvement cycles are cyclic and continual, so you can further refine or improve after you get started if you desire more simplicity…

This is an evolution I have seen many of our clients go through: they initially look at and start with Clarive because they have certain automation or orchestration needs. Then they find out they can do with Clarive what they did with Jenkins, and switch to Clarive; then they learn about Clarive’s CI repository and decide to eliminate Nexus. As Clarive has a powerful and integrated workflow automation capability, they realise they could also do without Jira and Bitbucket… and so on. Doing so has saved companies effort and cost.

In case you are interested in Clarive, download it for free here. See also some sample screenshots of the tool below.

Clarive tool

Clarive tool_screenshot

Clarive tool_screenshot_deploy package

Get an early start and try Clarive now. Get your custom cloud instance for free.

In today’s fast-moving world of DevOps, we need ARA more than ever to take control over complexity to deliver a great workflow that can unite teams at different speeds in the enterprise.

These slides go over why, when and how ARA may apply to your enterprise.


Get an early start and try Clarive now. Get your custom cloud instance for free.