This release contains many minor fixes and improvements since 7.0.12. It also focuses on refactoring the interface and improving the Kanban boards.

Git repository navigation in a tab

Clarive 7.0.13 includes a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more.

To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

Repository Navigation

Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded in Clarive) can include default data as part of it.

The ClariveSE profile now includes a sample-html project and two releases with several changes in them. It also automates the launch of three deployment jobs to INTE, TEST and PROD.

To install the profile together with its default sample data, execute cla setup <profile> and answer yes to the question Load default data?. Once you start the Clarive server, it will automatically load the profile and the default data.
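For example, for the ClariveSE profile the command would look something like this (the profile name shown is illustrative; use the name of the profile you want to load, and note that the prompt wording may differ slightly):

 $ cla setup clarive-se
 Load default data? yes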

Notes

Kanban Board improvements

Custom card layout

You can now configure the layout of the cards of your Kanban Boards to show the information that you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

Cards Layout

Auto refresh

In the Quick View options panel (click the View button), you will now find a switch to toggle Auto Refresh for the board. When enabled, the board is updated with changes to the topics shown whenever the board tab is activated.

Auto refresh

Save quick view by user

In Clarive 7.0.13 the options selected in the Quick View menu are saved locally in your browser storage, so every time you open the board it uses the last configuration you selected (swimlanes, auto refresh, cards per list, etc.).

Predefined statuses by list

Whenever you create a new board, it is created with three default lists, and default statuses are now assigned to those lists with the following rules:

  • New: Initial statuses
  • In Progress: Normal statuses
  • Done: Final and Cancelled statuses

Killtree when job is cancelled

One of the most important improvements in Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when the job is cancelled from the interface.


You can read more about this new feature in this blog post.

Improvements and issues resolved

  • [ENH] Git repository navigation in a tab
  • [ENH] Clax libuv adaptation
  • [ENH] NPM registry directory new structure
  • [ENH] Add rulebook documentation to service.artifacts.publish
  • [ENH] Return artifact url on publish
  • [ENH] Invite users to Clarive
  • [ENH] Load default data by profile
  • [ENH] Users can choose shell runner for rulebooks
  • [ENH] Kill job signal configured in yml file
  • [ENH] Add default workers configuration to clarive.yml file
  • [ENH] Boards shared with “ALL” users
  • [ENH] Kanban custom card fields
  • [ENH] Killtree when job is cancelled
  • [ENH] Kanban boards auto refresh
  • [ENH] Make sure to save kanban quick view session
  • [ENH] Filter data according to filter field in Topic Selector fieldlet
  • [ENH] Make sure new created boards have default lists
  • [ENH] Add date fields to card layout configuration
  • [FIX] Check user permissions in service.topic.remove_file
  • [FIX] Make sure user with permissions can access the rule designer
  • [FIX] Make sure CI permissions are working correctly
  • [FIX] Make sure that the ci grid is updated after the ci is modified
  • [FIX] Control exception when running scripts.
  • [FIX] Change project_security structure on user ci
  • [FIX] User without project field permissions can edit the topic
  • [FIX] Make sure React apps work in IE 11
  • [FIX] Show cis in create menu (standard edition)
  • [FIX] Administrator should be able to delete artifacts in ClariveSE
  • [FIX] When publishing NPM packages with scopes tarball is empty
  • [FIX] Make sure default values from variables are used when adding them
  • [FIX] Make sure notifications are sent only to active users
  • [FIX] Make sure to show username in “Blame by time” option for rules versions
  • [FIX] Remove default values when changing type of variable resource
  • [FIX] Allow single mode in variables resources
  • [FIX] Escape “/” in URLs for NPM scoped packages from remote repositories
  • [FIX] Avoid console message when opening a variable resource with cis set as default values
  • [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
  • [FIX] Refresh resources from url
  • [FIX] Create resource from versioned tab
  • [FIX] Make sure remote script element always display a final message
  • [FIX] Save variable when deleted default value field in a variable resource
  • [FIX] Make sure topic’s hidden fields are available as topicfields bounds
  • [FIX] Save resource when it does not have to validate fields
  • [FIX] Make sure projects can be added as kanban swimlanes
  • [FIX] Make sure changeset with artifact revision attached can be opened
  • [FIX] Make sure narrow menu repository navigation shows changes related to the branch
  • [FIX] Format event data when the fail service is used
  • [FIX] Make sure that the chosen element is always selected in the rule tree.
  • [FIX] Reload data resource when refreshing
  • [FIX] Job distribution and last jobs dashlets should filter projects assigned to the user
  • [FIX] Make sure the user combo does not offer grid mode in topics
  • [FIX] Make sure system users are shown in user combos
  • [FIX] Display column data in edition mode for a Topic Selector fieldlet in a topic
  • [FIX] Filter projects in grids by user security
  • [FIX] Make sure all height sizes are available in the topic selector combo
  • [FIX] Ship remote file: show log in several lines
  • [FIX] Skip job dir removal in rollback
  • [FIX] Remove FilesysRepo Resource
  • [FIX] Remove permissions option from user menu
  • [FIX] Make sure the screen layout displays correctly when maximizing the description and then going back in the browser
  • [FIX] Remove session when a user is deactivated
  • [FIX] Resources concurrency
  • [FIX] Validate CI Multiple option just with type ci variables
  • [FIX] Resource not saved when validation fails
  • [FIX] Make sure combo searches have optimal performance
  • [FIX] Make sure ldap authentication returned messages are available in stash
  • [FIX] Show date and time in fieldlet datetime
  • [FIX] User session should not be removed on REPL open
  • [FIX] User with action.admin.users should be able to edit users
  • [FIX] Make username available in dashboard rules execution
  • [FIX] Make sure collapsed lists are saved correctly in the user session

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgments

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Try Clarive now and start improving your DevOps practices.


Clarive 7.0.13 introduces a new feature that allows remote jobs to be killed when a pipeline job is cancelled.

Normally, pipeline job cancellation only ends processes local to the Clarive server and keeps remote processes running. This was working as designed, as we did not intend to nuke remote processes inadvertently.

This is an interesting subject that we think could be of use within or outside the scope of Clarive, and it may be useful if you’re wondering how to interrupt job pipelines while they’re running, or how to kill scripts that run remote processes.

killing a remote process tree

Why remote processes

Pipeline job remote execution starts remote processes using one of our three communication agents/transports: SSH, ClaX (lightweight push agent) and ClaW (lightweight pull worker). This article focuses on the SSH transport, as it is the most generic, but it also applies to ClaX and ClaW.

When a pipeline kicks off a remote job, Clarive connects to a remote server and starts the command requested. The connection between the Clarive server and the remote machine blocks (unless in parallel mode) and remains blocked for the duration of the remote command.

Here’s a rulebook pipeline example:

do:
   shell:
     host: user@remserver
     cmd: sleep 30

The above example will block and wait 30 seconds for the remote sleep command to finish.

During the execution of the command, if we go to the remote machine and do a ps -ef, this is what we’d find:

 user  12042 12012  0 07:47 ?        00:00:00 sshd: user@notty
 user  12043 12042  0 07:47 ?        00:00:00 sleep 30

Most remote execution engines do not track and kill remote processes. The issue of killing remote processes and giving the user feedback (in the CLI or UI) is present in DevOps tools from Ansible to GitLab and many others.

 https://gitlab.com/gitlab-org/gitlab-ce/issues/18909
killing a remote process tree

Currently killing a job will not stop remote processes

Killing the remote parent process

Before this release, cancelling a job would end the local process and leave the remote command running on the remote server.

You can reproduce the same behavior from the Clarive server with the SSH client command ssh:

 clarive@claserver $ ssh user@remserver sleep 30

Now if we killed the server process – with Clarive’s job cancel command or with a simple Ctrl-C or even a kill -9 [pid] through the SSH client:

 clarive@claserver $ ssh user@remserver sleep 30
 Killed: 9

That typically does not work: the child processes will remain alive and become children of the init process (process id 1). This would be the result on the remote server after the local process is killed or the Clarive job cancelled:

 user  12043     1  0 07:47 ?        00:00:00 sleep 30

The sshd server process that was overseeing the execution of the remote command terminates. That’s because the socket connection has been interrupted. But the remote command is still running.

Pseudo-TTY

One way to interrupt the remote command is to use the ssh -t option. The -t flag tells the SSH client to allocate a pseudo-TTY, which basically makes the local terminal mirror what a remote terminal would be, instead of just running a command.

If you have never used it, give it a try:

$ ssh -t user@remserver vim /tmp/

It will open vim locally as if you had a terminal open on the remote machine.

Now, if you try to kill a process started with -t using Ctrl-C, the remote sshd process will terminate the child processes as well, just like when you hit Ctrl-C on a local process.

$ ssh -t user@remserver sleep 30
^C
Connection to remserver closed.

No remote processes remain alive after the kill, and sleep 30 disappears on remserver.

However, this technique does not solve our problem: pipeline jobs are not interactive, so we cannot tell the ssh channel to send a remote kill just by setting up a pseudo-TTY. The kill signal will only have an effect locally and on the remote sshd, and will not be interpreted as a user manually hitting Ctrl-C.

The solution: tracking and pkill

The way to correctly stop remote processes when pipeline jobs are cancelled is to do it in a controlled fashion:

1) Clarive job process starts remote command and keeps the connection open

2) Clarive job is canceled (by the user normally, through the job monitor)

3) Clarive creates a new connection to all servers where commands are being executed

4) A pkill -[signal] -P $PPID command is sent through the same sshd tunnel

5) The pkill kills the parent remote sshd process and all its children, also called the process tree

That way all the remote processes are stopped with the job cancel.
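As a rough illustration of steps 3 to 5, and assuming the remote sshd PID was tracked when the command started (12042 in the ps listing above), the command sent over the new connection would look something like this:

 # kill the children of the tracked sshd process, then the sshd process itself
 clarive@claserver $ ssh user@remserver 'pkill -15 -P 12042; kill -15 12042'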

killing a remote process tree

Successfully killing remote processes will kill the full remote tree

Picking a signal

Additionally, we’ve introduced control over the local and remote signals to send to end the processes. You may be interested in sending a more stern kill -9 or just a nice kill -15 to the remote process.

Clarive will not wait for the remote process to finish since, as we have witnessed many times, certain shutdown procedures can take forever, but it does apply a timeout to the local job processes that are still running and may be waiting for the remote process to finish.

The following config/[yourconfig].yml file options are available:

# kill signal used to cancel job processes
# - 9 if you want the process to stop immediately
# - 2 or 15 if you want the process to stop normally
kill_signal: 15

# 1|0 - if you want to kill the job children processes as well
kill_job_children: 1
# signal that will be sent to remote children
kill_children_signal: 15

# seconds to wait for killed job child processes to be ripped
kill_job_children_timeout: 30

Why killing remote processes is important

When we get down to business, DevOps is as much about running processes on remote servers, cloud infrastructure and containers as it is about creating an empowered, do-IT-yourself culture.

If you are building DevOps pipelines and general remote process execution and want to stop them midway through for whatever reason, it’s important to have a resilient process tree that is tracked and can be killed when requested by the master process.

Happy scripting!


Get an early start and try Clarive now. Install your 30-day trial here.



Environment provisioning is a key part of a continuous delivery process. The idea is simple: we should not only build, test and deploy application code, but also the underlying application environment.

Are your environments being provisioned on-demand as applications deploy? Can Devs request new environments to fit changes in how the application is built? Is environment configuration and modeling built into the application deployment process?

What is the “environment”

The application environment consists of three main areas:

  • Infrastructure

  • Configuration

  • Dependencies

Infrastructure is the most important element of the environment, as it defines where the application will run, the specific configuration needs and how dependencies need to interact with the application.

Configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application.

Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.

What is infrastructure today?

The concept of infrastructure refers to the components, hardware and software, needed to operate an application, service or system. But, as hardware has been abstracted away in favor of scalable, reliable and affordable solutions, the true definition of infrastructure is now something different for every application.

We could say infrastructure now advances almost as fast as application technologies and languages themselves. Infrastructure has melded with the application in such a way that now we have to basically pick the infrastructure as part of the architecture decisions.

Infrastructure has never stopped evolving:

  • virtualization, or how to provision infrastructure in minutes instead of days
  • containerization, or how to provision infrastructure in seconds instead of minutes
  • the cloud, or how to provision infrastructure you do not own
  • serverless, or how not to provision infrastructure as it will provision itself on demand
  • and so on…

And IT infrastructure is not just vertical, but also horizontal, as platforms can also connect many service pods and execution silos.

How about configuration and dependencies?

Configuration and dependencies are deep topics that deserve their own articles. But let’s say that today both tend to be containerized one way or another, meaning that both programming languages and infrastructure technologies promote packaging environment configuration and dependencies as part of the deliverable.

Business Challenges

For any organization to successfully test and release new applications or application versions, the appropriate environment(s) need to be in place first. The enterprise follows many different procedures for supplying environments to applications; the process may be manual, automated, or a combination of both.

It also may be in the hands of different teams or roles within the enterprise, from developers to operations and sysadmins.

Environment provisioning is how an organization manages infrastructure before, during and after the lifespan of an application or service. It is independent of Service Oriented Architecture (SOA), micro-services, full MVC applications or any other execution structure, as running services or applications all need environment management:

  • Before: setup development and unit-testing environments and basic SCM controls. Provision environments as the delivery flow advances.

  • During: applications change with new requirements or bugs, and they may need to scale current or provision new features of its infrastructure as they go.

  • After: when done, decommissioning environments is important to deallocate valuable resources.

Business Benefits

Now, why would you want to automate environment provisioning? Or rather, how would you demonstrate that spending time automating and building provisioning into continuous deployment can be beneficial to your organization?

Reduce Average Time to Provision

This KPI offers a direct measurement of the end-to-end process for provisioning new infrastructure, including physical and virtual machines, processing power, storage, databases, middleware, and communication and networking infrastructure, among others. IT managers and business users can employ this metric as an indication of the degree to which IT is supporting the needs of the business.

Reduce Service Complexity

Centralizing all environment automation operations in one place simplifies making decisions that affect the business, permitting a quick turnover of scalability investments and impact analysis and change execution when rearranging infrastructure resources to meet new business requirements.

Get Service Allocation Insights

Measure how and for how long infrastructure is being used, how fast it is being delivered and disposed of, and what business requirements are behind each request.

Greater service efficiency and decreased costs

Provisioning and disposing of environments following established patterns and templates helps reduce waste and complexity. Modular environments are also easier to debug and scale, as changes only impact one application instead of a cluster of applications. The bottom line is that modular environments reduce the cost of running tests and are easier to throttle in production, which also translates into applications that only consume what they need at a given time.

Use Cases

Here is a sample of the use cases that lead an organization to implement a provisioning solution:

  • Onboarding new application teams, who have to deal with organizational complexity and adhere to standards and release policies. Provisioning a baseline environment catalog helps circumvent organizational obstacles and technological challenges.

  • Coordinating code releasing and provisioning to make sure infrastructure changes arrive just-in-time with the code that requires it.

  • Dev and QA environment provisioning, to simplify and control the process of procuring and automating environment generation.

  • Self-service provisioning, offering users a “catalog” with a set of service requests and tasks that can be launched and managed.

  • Infrastructure code that is being pushed and continuously deployed by developers (i.e. Dockerfiles or Chef recipes), but requires more control and end-to-end visibility.

Get started today

If you feel you’re not taking full advantage of environment provisioning, here are a few ideas to get started ASAP:

  • When starting new apps, or designing architecture, try to predict how environments will be provisioned at the different stages of the delivery pipeline, especially QA and Production.

  • Build apps that can have features tested on demand by spinning up throw-away environments.

  • Have your DevOps processes (i.e. scripts, builds) run in containers too.

  • Add environment provisioning to your CI/CD pipeline (see the sketch after this list).

  • Offer users a way to control and define the environment rules and infrastructure needs, as code if possible.
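As a rough sketch of the last two points in rulebook terms, a pipeline step could provision or update the environment before the application itself is deployed. The host variable, compose file and commands below are placeholders, not a prescribed setup:

deploy:
  do:
    # sketch only: bring up (or update) the target environment from
    # versioned infrastructure code before deploying the application
    - shell:
        host: ${qa_server}
        cmd: |
          docker-compose -f environments/qa.yml up -d
    - log:
        level: info
        msg: QA environment provisioned, application deployment can proceed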

In general, containers are a great way to go thanks to their infrastructure-as-code nature and natural DevOps fit. Serverless, on the other hand, can abstract away many environment concerns and make better use of resources as applications grow.


Let Clarive be your best friend in your DevOps journey to continuous delivery. Start with your 30-day trial now.


Elixir logo

As you can see in our previous posts, having a complete pipeline to introduce DevOps into your day-to-day life is easy with Clarive’s Rulebook.

You just have to follow these three simple steps:
1. Get your 30-day trial
2. Upload your code to your Clarive Project repository.
3. Prepare your rulebook, push your commit and enjoy! (oops, maybe four steps would have been better :))

So let’s get down to business: we will detail the needed code.

Defining our variables

First, we declare the variables that will be used throughout our pipeline process.

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

Building our application

In this step, we choose the Elixir Docker image, using Mix as the build tool.

build:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/

And publish the compiled application to our artifact repository.

    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

Ready to test

As long as we have our own application tests, this step is as simple as running the right command.

test:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'

    - shell: |
        cd {{ workspace }}
        mix test

Deploy wherever we want

Now it’s time to choose where our app will run. For example, we can send the tar file to another server and run the app there.

deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run

This remote_server could be an AWS instance in the PROD environment or another Docker container just for QA.
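Putting the pieces together, the complete .clarive.yml would look roughly like this (assembled from the fragments above; adjust paths and the remote host to your own setup):

vars:
  - workspace: "${project}/${repository}"
  - server: https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/${project}

build:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'
    - shell [Compile application]: |
        cd {{ workspace }}
        mix compile
        tar cvf ${project}_${job}.tar _build/dev/lib/
    - artifact_repo = publish:
        repository: Public
        to: '${art_path}'
        from: '{{ workspace }}/${project}_${job}.tar'
    - log:
        level: info
        msg: Application build finished

test:
  do:
    - image:
        name: 'elixir'
        runner: 'bash'
    - shell: |
        cd {{ workspace }}
        mix test

deploy:
  do:
    - ship:
        from: '${art_path}/${project}_${job}.tar'
        to: /tmp/remotepath/
        host: ${remote_server}
    - shell: |
        cd /tmp/remotepath/
        tar -xvf ${project}_${job}.tar
        mix run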

Happy Ending

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps that the rulebook will follow.
To start the deployment, we only need to push to the repository, just as we have seen before in other posts. When we push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in .clarive.yml.


Visit Clarive documentation to learn more about the features of this tool.



Increase quality and security of node.js applications which rely on NPM packages.


Developers often create small building blocks of code that solve one particular problem and then “package” this code into a local library following NPM guidelines. A typical application, such as a website, often consists of dozens or hundreds of such small node.js packages. Development teams then use these packages to compose larger custom solutions.

While NPM allows teams to exploit the expertise of people who have focused on a particular problem area, whether inside or outside the organization, and helps teams work together better by sharing talent across projects, we often see companies struggling with the quality of the packages being used and, as a result, looking for better ways to control their usage.

Better ways to manage and control which packages are deployed in their cloud and/or their data centers are vital.

Organizations want to reduce the risk of failure or instability resulting from downloads of the latest version of a required NPM package from the internet, potentially improperly tested.

This video shows how Clarive can help you make your node.js applications that use NPM packages more secure and stable.


Start creating NPM packages with Clarive. Get your 30-day trial now.


In this video we can see how Clarive checks the availability of certain versions of applications (BankingWeb 2.1.0 and ComplexApp 1.2.0) needed in an environment before deploying a version of another application (ClientApp 1.1.0).

We could consider the applications(BankingWeb and ComplexApp) as pre-requisites for the actual application (ClientApp) for which the deployment is requested.

When the required application versions (BankingWeb 2.1.0 and ComplexApp 1.2.0) are not yet in the target environment, Clarive will block the deployment until those applications are either deployed first (in a separate job) or added to the same deployment job as the application that requires them.


Start your DevOps journey to continuous delivery with Clarive now. Get your 30-day trial here.


Continuous integration (CI) is now becoming a standard in all software projects. With it, a pipeline is executed for each new feature or change in the code, which allows you to carry out a series of steps to ensure that nothing has been “broken”.

One of the main steps to undertake in CI is the implementation of tests, both unit tests and regression tests. The latter can be done in the nightly build that Clarive launches on a daily basis, at the time we pre-define within our Releases.

Unit tests must be run whenever there is any change in the code. This brings us to our first step: unit tests run with each change in the source code through a deployment (Continuous Integration), and regression tests are launched in the nightly builds of each new Release. This ensures that the new version of our product goes to market without bugs or problems in the features that worked well in the current versions.

At the end of this post the reader should know how to:
– Integrate tests into a Clarive rulebook.
– Publish files in an artifact repository.
– Send notifications to users.

Ready to start

To start this step-by-step guide you need a copy of Clarive installed on your server (you can request a 30-day trial).

Overview

For this post, we will use the following workflow:

1- The user commits the tests.
2- Clarive runs the rulebook.
3- Mocha runs the tests.
4- Clarive posts a message in Slack with the report.

For the first part of the development, we will assume that the developer has already written their tests. In this example we will use a simple “Hello world” and some example tests that you can find here, which use the expect.js library; it will be installed while the rulebook runs.
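As a reference, one of those tests could look like the minimal sketch below. This is a hypothetical test/helloWorld.tests.js written against an assumed helloWorld() function in src/; the real example tests are the ones linked above.

// test/helloWorld.tests.js: hypothetical sketch of a mocha test using expect.js
var expect = require('expect.js');
var helloWorld = require('../src/helloWorld'); // assumes src/helloWorld.js exports a function

describe('helloWorld', function () {
  it('returns the expected greeting', function () {
    // expect.js uses .to.be() for strict equality
    expect(helloWorld()).to.be('Hello world');
  });
});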

Workspace files:

In this example within our git repository we have the following file structure:

├── .clarive.yml
├── package.json
├── src
│   ├── add.js
│   ├── cli.js
│   ├── divide.js
│   ├── helloWorld.js
│   ├── multiply.js
│   ├── sort.js
│   └── subtract.js
└── test
    ├── helloWorld.tests.js
    ├── test-cli.tests.js
    └── test.tests.js

.clarive.yml: The rulebook file that Clarive runs
package.json: NPM configuration file. Within this file we have defined a command to run the tests that are found in the test folder:

"scripts": {
    "test": "mocha --reporter mochawesome --reporter-options reportDir=/clarive/,reportFilename=report,reportPageTitle=Results,inline=true"
  },

src: Folder where we find the source code.
test: Folder where the test files are located.

Writing a rulebook

The rulebook will be responsible for running the tests and generating a report to notify the user of the results of the tests.

First phase: Running the tests

As we can see in the .clarive.yml file skeleton, there is a step named TEST that we will use to run the tests:

test:
  do:
    - log:
        level: info
        msg: The app will be tested here

But first, let’s define some global variables:

vars:
  - workspace: "${project}/${repository}"
  - server:  https://<my-clarive>.clarive.io
  - art_path: ${server}/artifacts/repo/test-reports

We are going to use node.js with mocha, chai and expect.js as the libraries to run the tests. Thanks to Docker images, we can use a container with mocha and chai already installed, which means you only need to install expect.js from the rulebook.

We can now specify the Docker image we’re going to use during this TEST phase, then install the expect.js library in our working directory and, finally, install the remaining dependencies from our package.json:

test:
  image: mocha-chai-report
  do:
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install

Next, we just need to run the tests. As we have already defined the test command in our package.json, we only need to run npm test:

test:
  image: mocha-chai-report
  do:
    - shell:
        cmd: |
             cd ${workspace}
             npm install expect.js
             npm install
             npm test

Second phase: Publish report

When you finish running the tests, an HTML report will be generated with all the results, thanks to the mochawesome library installed in the Docker image. We will now publish this report in our artifact repository:

    - publish_report = publish:
        repository: Test reports
        from: 'report.html'
        to: "test-reports/{{ ctx.job('name') }}/"
    - log:
        level: info
        msg: "Report generated successfully!"

We are now going to complete the POST step. During this step we will notify the users we choose that the deployment has finished.

Third phase: Notifications

We need to post the message in Slack, with a link to the report. To do this, we will have to have our WebHook configured in Slack and in Clarive:

Now all that remains is to add the sending of the message to the POST step.
From the Slack API, any user can configure their own messages. In this case we have configured a message that lets the user, on receiving it, access the report from the same channel:

- slack_post:
          webhook: SlackIncomingWebhook-1
          text: ":loudspeaker: Job *{{ ctx.job('name') }}* finished."
          payload: {
              "attachments": [
              {
                "text": "Choose an action to do",
                "color": "#3AA3E3",
                "attachment_type": "default",
                "actions": [
                    {
                        "name": "report",
                        "text": "View test report :bar_chart:",
                        "type": "button",
                        "style": "primary",
                        "value": "report",
                        "url": "${art_path}/test-reports/{{ ctx.job('name') }}/report.html"
                    },
                    {
                        "name": "open",
                        "text": "Open Monitor in Clarive",
                        "type": "button",
                        "value": "open",
                        "url": "${server}/r/job/monitor"
                    }]
             }
           ]
        }

Let’s do some continuous integration!

Using Clarive, we can create a Story where we will attach all the code. If we haven’t already created a Project, we can create one using the Admin panel:

Then we create a repository of artifacts (a public one in this case):

Once we have carried out the previous steps we will create the Story in Clarive:

We’ll now change the state to “In Progress” and Clarive will automatically create a branch in the repository:

From the console we’ll clone the repository and change to the branch we’ve just created:
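It would look something like this (the repository URL and branch name are placeholders):

 $ git clone <clarive-repository-url>
 $ cd <repository>
 $ git checkout <story-branch>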

We’ll now place all our files in this branch and push them to the repository:
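For example (again, the branch name is a placeholder):

 $ git add .clarive.yml package.json src/ test/
 $ git commit -m "Add Hello world sources, tests and rulebook"
 $ git push origin <story-branch>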

Clarive will automatically generate the deployment:

Once it has finished, we will receive the notification on the Slack channel that we selected:

Finally, we can check the test report in the artifact repository.

Next Steps:

In Clarive – After making this first contact with Clarive, our Story can be assigned to a Release (one previously created from the release menu) and, after changing the state to “Ready”, the developed feature will be automatically merged into the branch of the release.

In Rulebook – This is a simple example. The rulebook can be as complex as the user wants it to be, for example by adding if statements or adding elements to the different deployments. For example, if something has gone wrong, create a new Issue in Clarive automatically, or only run tests if the “test/” folder has been modified in the commit… the possibilities are limitless!


Visit our documentation to learn more about the features of Clarive.