We are glad to present our new Git-flow-based branching model, available for Clarive 7.

This new Git flow enables you and your team to:

  • Revert features
  • Deploy different releases progressively to environments
  • Maintain simultaneous live releases
  • Isolate feature groups into different release branches

This presentation explains the problem it solves and how it can improve your day-to-day workflow for tracking changes and delivering applications.

 


Visit our documentation to learn more about the features of Clarive.



I deliberately stated the title in the “traditional” way: how to get control OVER teams.


You want to implement DevOps, so you are aware that you need to adopt Lean and Agile principles along the way to success.

Implementing DevOps indeed implies not only the introduction of tooling to ensure end-to-end automation, but also a change in culture and in the way different people and parts of the organization collaborate.

What does Agile tell us?

Studying agile principles, you will have discovered that you best build projects around motivated individuals. You should give them the environment and support they need and trust them to get the job done. Another principle is to make teams self-organizing: the best architectures, requirements, and designs emerge from self-organizing teams. Finally, you should strive for simplicity, as this is essential for flow, quality, and focus. If you are interested in further detail, I suggest you also take a look at the 12 principles behind the Agile Manifesto.

What does Lean tell us?

When reading about Lean, you probably have encountered the DMAIC improvement cycle. DMAIC, which is an acronym for Define, Measure, Analyze, Improve and Control, is a data-driven improvement cycle used for improving, optimizing and stabilizing (business) processes and designs. It is in fact the core tool used within Six Sigma projects. However, DMAIC is not exclusive to Six Sigma and can be used as the framework for other improvement applications.

Since in this blog I am talking about control, let’s elaborate some more on the “C” in DMAIC.
The focus of the “C” is on how you sustain any improvements or changes made. Teams can introduce changes and improvements, but they must ensure that the process maintains the expected gains. In the DMAIC control phase, the team focuses on creating a so-called monitoring plan to continue measuring the success of the updated process.

What to remember?

Transposing the above into the context of DevOps and application delivery, the three things that, to me, give better control are:

  • Build cross-functional teams of motivated individuals, people willing to go for customer value.
  • Give those teams the environment and support they need to get the job done. To maximize flow while remaining customer focused and striving for quality, ensure the environment is as simple as possible.
  • Make sure it is easy for teams to monitor and measure delivery performance toward “success”.

Interestingly enough, you will not find any direct guidance in the above on how to gain control OVER teams or people, because it is against fundamental Lean and Agile principles. As a manager, you can “control” more or less by helping shape or define the WHAT, the objective, the result, the definition of success, but it is up to the team to define and decide on the “HOW”, the way to get to success or the objective. The reason I write “more or less” is because success is mainly defined by the customer, the consumer of what is being delivered, not necessarily the internal manager.

Now let’s drill a little deeper into the support a team needs to get (self-)control. We mentioned a simple environment and a way to monitor and measure.

In the context of application delivery, this translates into the application delivery end-to-end toolchain environment and the way delivery activities can be monitored and measured within this environment.

Very often when speaking to customers, I hear about toolchain environments similar to the one in the picture below:

Code-Deploy-Track by clarive

I typically see different tools used for different aspects of the delivery process, often hardly linked to one another, if at all. Many times, I even see multiple tools deployed within the same phase of delivery (as shown above).

Why is that? I have witnessed multiple reasons why companies have seen their delivery environment grow over the years, the most common ones being:

  • Different platforms requiring different tooling for coding and/or deploying
  • Through acquisition, different environments have been inherited, and as a result multiple tools became part of the merged organization. To avoid too much change, environments have been left untouched.
  • Companies have given their delivery teams autonomy/flexibility without proper guidance. At first sight, giving power to teams is aligned with the proposed principles, but if this is done without an overall architecture, or in silos, it can lead to suboptimal conditions.

The biggest issue for organizations providing a delivery environment similar to the one in the picture above is that tracking (the end-to-end monitoring and measuring of the delivery process) becomes a real nightmare.

According to Lean principles, one should monitor and measure the delivery value stream. This value stream is customer centric, so it crosses delivery phase and tooling boundaries. If measurement data is spread over 30+ tools, then monitoring performance and obtaining real-time insight becomes a real challenge.

How to become successful?

Clarive has been designed and developed with the above in mind. The Clarive DevOps platform aims for simplicity and ultimate operational insight.

Real-time monitoring and measurement is achieved by strong automation and integration. Automating the delivery process implies automation of related process flows (such as demand, coding, testing, defect, and support flows) combined with the automation of delivery related activities (such as build, test, provision, and deploy). Clarive is the only tool that allows you to automate both within the same tool. This is what gives you simplicity! No need for multiple tools to get the job done. As a result, all measurement data is captured within a single tool, which gives you real-time end-to-end data across the delivery value chain. This is exactly what teams need to control their process and what organizations need to control/understand the overall process.

But the reality is that significant investment might already have been made in certain delivery areas (tool configurations or script/workflow automations), something the business will not easily allow to be thrown overboard, as that would be seen as “waste”. Clarive addresses this with its strong bi-directional integration capabilities, allowing organizations to re-use existing investments and treat simplification as part of the improvement cycle.

As a result, Clarive enables teams and companies to gain insight and control over their end-to-end delivery processes in very limited time.

Below are some sample screenshots of how Clarive provides powerful Kanban as well as real-time monitoring insight and control.

Kanban as well as real-time monitoring insight and control

assigned_swimlane by clarive


Get an early start and try Clarive now. Install your 30-day trial here.



In this video we will deploy a mainframe COBOL application with Clarive on the z/OS mainframe.


This will be done in a continuous way: when a developer pushes a revision to a Git repository, Clarive triggers an event.
Event triggering invokes an event rule in Clarive.

That rule searches the Clarive database for a changeset with status “In Dev”. If one is found, the newly pushed revision will be related to it.

If none is found, a new changeset for the application will be created, with the revision related to it. The changes to the sources for that revision can be seen from the Clarive UI.

A job will also start to compile and link the COBOL sources on the mainframe.

Clarive traps the JES spool output and makes it available in its UI. The job is generated by a versioned pipeline rule. In all of the videos, for all of the technologies (WAR, CA-Endevor package, .NET, mobile apps), the same rule is invoked for all of the environments (DEV, QA, PreProd, PROD).

For any of the environments the application needs to be deployed to, the same pipeline rule will be used, ensuring a very consistent way of deploying.

In an upcoming video, the changeset with the mainframe COBOL revision will be related to a release and, together with changesets for other technologies (CA-Endevor package, WAR, .NET, mobile, …), deployed to the production environment, again using the same pipeline rule to generate the job.

This means that a single job will deploy multiple technologies to multiple platforms.


Visit our documentation to learn more about the features of Clarive.



In this video we will build and deploy a WAR file with Clarive to a Tomcat webserver.


This will be done in a continuous way: when a developer pushes a revision to a Git repository, Clarive triggers an event.

Event triggering invokes an event rule in Clarive. That rule searches the Clarive database for a changeset with status “In Dev”.

If one is found, the newly pushed revision will be related to it. If none is found, a new changeset for the application will be created, with the revision related to it.

The changes to the sources for that revision can be seen from the Clarive UI. A job will also start to build and deploy the WAR file.

The job is generated by a versioned pipeline rule. In all of the videos, for all of the technologies (mainframe COBOL, CA-Endevor package, .NET, mobile apps), the same rule is invoked for all of the environments (DEV, QA, PreProd, PROD).

The WAR file will be deployed to a Docker container in the DEV environment, to an Amazon instance in the QA environment, and to an on-premises Tomcat server for the production environment.

For any of the environments the application needs to be deployed to, the same pipeline rule will be used, ensuring a very consistent way of deploying.

In an upcoming video, the changeset with the WAR file will be related to a release and, together with changesets for other technologies (CA-Endevor package, mainframe COBOL, .NET, mobile, …), deployed to the production environment, again using the same pipeline rule to generate the job. This means that a single job will deploy multiple technologies to multiple platforms.


Get an early start and try Clarive now. Install your 30-day trial here.



In this video we will deploy CA-Endevor packages with Clarive on the z/OS mainframe.


Not only is the deployment of CA-Endevor packages covered, but also the real-time integration between Clarive and CA-Endevor. With that integration, the content of packages can be visualized in Clarive, CA-Endevor reports can be run against the elements of a package, changes made to the sources can be seen, relationships with other elements can be tracked, etc.

The approved packages (cast on the mainframe) become available for dragging and dropping into a Clarive changeset and will be deployed to environments such as QA, PreProd, and PROD, using a pipeline rule to generate the deployment job.
For any of the environments the package needs to be deployed to, the same pipeline rule will be used, ensuring a very consistent way of deploying.

In an upcoming video, the changeset with the CA-Endevor package(s) will be related to a release and, together with changesets for other technologies (WAR, .NET, mobile, …), deployed to the production environment, again using the same pipeline rule to generate the job.


Visit our documentation to learn more about the features of Clarive.


We’re pleased to present our new release, Clarive 7.0.12. This release contains a variety of minor fixes and improvements over 7.0.11. It focuses on interface refactoring.

NPM Artifact Repository management

The Clarive team is proud to release this version with an enhanced artifact repository. This new functionality enables the management of NPM packages.

  • It is now possible to browse the NPM repository folders through the artifacts interface, visualize their content, and distinguish new packages that have been included in the repository.

  • Create artifact tags in order to sort them

  • Use Clarive NPM repositories that serve as a proxy to the global NPM store at npmjs.org, or just use them as local repositories, so you can control which public packages are available for your developers
npm install angularjs --registry http(s):///artifacts/repo/
  • Use Clarive Groups of repositories to categorize packages and access several local repositories with just one registry
npm install angularjs --registry http(s)://<clarive_url>/artifacts/repo/<npm_repo_group>
  • Directly publish to Clarive NPM repositories with the npm publish command
npm publish ./ --registry http(s):///artifacts/repo/
  • You can also publish packages through rulebooks
do:
  - publish:
      repository: '' # repository name
      from: ''
      to: ''

Take a look at our docs website and learn how to configure your artifact repository in Clarive.

NPM repository events exist in Clarive. So, for example, when the *npm publish* command is executed against a repository, the artifact is published in Clarive, which can send a notification email to your team. For more information, go to our documentation and learn all you can do with events.

Improvements and issues resolved

  • [ENH] – Project menu revamp
  • [ENH] – Plugins code structure and formatting
  • [ENH] – Owner can cancel and restart jobs
  • [ENH] – Interface plugins standardization
  • [FIX] – Docker images cache management
  • [FIX] – Show subtask editable grid only during edition
  • [FIX] – Differentiate environments and variables in menu

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

Acknowledgements

Join us in our Community to make suggestions and report bugs.

Thanks to everyone who participated there.


Get an early start and try Clarive now. Install your 30-day trial here.



This video shows how .NET applications can be deployed with Clarive.


For this deployment, a SINGLE pipeline is used to deploy to the DEV and QA environments, ensuring a consistent way of deploying.

In an upcoming video, this changeset will be related to a release and deployed together with a mainframe application change and a Java application change into production, again with the SAME pipeline.


Visit our documentation to learn more about the features of Clarive.



Check out how to get started with a complete Lambda delivery lifecycle in this blog post


Today we’ll take a look at how to deploy a Lambda function to AWS with Clarive 7.1.

This example also includes some interesting ideas that you can implement in your .clarive.yml files to manage your application deployments, such as variables that can be parsed by Clarive.

Setup

Add the following items to your Clarive instance:

  • A Slack Incoming Webhook pointing to the webhook URL defined in your Slack account (see https://api.slack.com/incoming-webhooks)

  • Two variables with your AWS credentials: aws_key (type: text) and aws_secret (type: secret)

Slack is actually not mandatory to run this example, so you can just skip it. You can also hardcode variables into the .clarive.yml file, but you would then be missing one of Clarive’s nicest features: variable management 😉

Create your 2 AWS variables in Clarive

Head over to the Admin Variables menu to setup the aws_key and aws_secret variables:

serverless aws clarive lambda variables

Setup your AWS Lambda credentials with Clarive variables

As the variable type field indicates, secret variables are encrypted into the Clarive database.

serverless clarive aws lambda secret var aws_secret

Create a secret variable to store your AWS credentials

.clarive directory contents

As you can see in the following .clarive.yml file, we’ll be using a rulebook operation to parse the contents of a file:

  - aws_vars = parse:
      file: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}/.clarive/vars.yml"

In this case we’ll load a file called vars.yml from the .clarive directory in your repository, and the variables will be available in the aws_vars structure for later use, e.g. {{ aws_vars.region }}.

If you have a look at that directory, there is one vars.yml for each environment. Clarive will use the correct file depending on the target environment of the deployment job.
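For illustration, such a per-environment vars.yml can be a simple flat map of keys. Only the region key is referenced in this post; the other entries below are hypothetical placeholders:

```yaml
# .clarive/vars.yml (hypothetical sketch for the DEV environment)
region: eu-west-1   # referenced as {{ aws_vars.region }} in the pipeline
stage: dev          # example key, not used in this post
memory_size: 128    # example key, not used in this post
```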

The .clarive.yml file

The .clarive.yml file in your project’s repository is used to define the pipeline rule that will execute during CI/CD, building and deploying your Lambda function to AWS.

This pipeline rule will:

  • replace variables in the Serverless repository files with contents stored in Clarive

  • run the serverless command to build and deploy your Lambda function

  • notify users in a Slack channel with info from the version and branch being built/deployed
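Taken together, these three steps correspond to the rulebook operations discussed in the rest of this post. A heavily simplified skeleton (parameters trimmed; see the full .clarive.yml linked at the end for the real thing) could look like:

```yaml
do:
  # 1. replace variables in the repository files with contents stored in Clarive
  - sed [Replace variables]:
      path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # 2. run the serverless build/deploy inside a Docker image
  #    that has the framework preinstalled
  - image:
      name: laardee/serverless
  # 3. notify users in a Slack channel
  - slack_post:
      webhook: SlackIncomingWebhook-1
```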

Slack plugin operation in use

For posting updates of our rule execution to your Slack chat, we’ll use the slack_post operation available in our slack plugin here. With Clarive’s templating features we’ll be able to generate a more self-descriptive Slack message (also called a payload):

  - text =: |
      Version:
         {{ ctx.job('change_version') }}
      Branch:
         {{ ctx.job('branch') }}
      User:
         {{ ctx.job('user') }}
      Items modified:
         {{ ctx.job('items').map(function(item){ return '- (' + `${item.status}` + ') ' + `${item.item}`}).join('\n') }}
  - slack_post:
      webhook: SlackIncomingWebhook-1
      payload:
         attachments:
           - title: "Starting deployment {{ ctx.job('name') }} for project {{ ctx.job('project') }}"
             text: "{{ text }}"
             mrkdwn_in: ["text"]

You can play around with it to experiment with different formats and adding or removing contents to the payload at will.

Replacing variables in your source code

We use the sed operation in the build step:

  - sed [Replace variables]:
      path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
      excludes:
        - \.clarive
        - \.git
        - \.serverless

This will parse all files in the specified path: and replace all {{}} and ${} variables found. You can find a couple of examples in the handler.js file in the repository.

Docker image

Our rule uses an image from https://hub.docker.com that has the Serverless framework already installed:

  - image:
      name: laardee/serverless
      environment:
         AWS_ACCESS_KEY_ID: "{{ ctx.var('aws_key') }}"
         AWS_SECRET_ACCESS_KEY: "{{ ctx.var('aws_secret') }}"

Note that we set the environment variables needed for the Serverless commands to point to the correct AWS account.

Operation decorators

Some of the operations you can find in the sample .clarive.yml file are using decorators such as: [Test deployed application].

Decorators are shown in the job log inside Clarive instead of the name of the operation. They make the job log easier to read, since operations carry textual information describing what they are doing. This is especially true with longer pipelines and complex rules.

clarive serverless pipeline message decorator

Clarive job log message decorator in action

You can actually also use variables in decorators to make them more intuitive for the user!
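For example (a hypothetical variation on the sed step shown earlier), a decorator can interpolate job variables so the log line names the repository being processed:

```yaml
  # the decorator text between [] appears in the job log,
  # with {{ ctx.job('repository') }} expanded to the repository name
  - sed [Replace variables in {{ ctx.job('repository') }}]:
      path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
```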

The full .clarive.yml file is available on our Github instance:

https://github.com/clarive/example-app-serverless/blob/master/.clarive.yml

Building and deploying your Serverless app

  • First, clone this Github repository into a new or an existing project (e.g. project: serverless, repository: serverless)

Clone the new Clarive repository from your Git client:

git clone http[s]://<your_clarive_instance_URL>/git/serverless/serverless

Create a new topic branch. Here we’ll be tying our branch to a User Story in Clarive:

cd serverless
git checkout -b story/to_github

Now commit and push some changes to the remote repository and go to Clarive monitor. It should have created a new user story topic for you and automatically launched the CI build:

serverless clarive build deploy ci cd job

Serverless CI/CD job with Clarive

From here on, you can start the build and deploy lifecycle, including deploying to other environments (e.g. Production or QA) and other deployment workflows. Just set up different variable values for each environment, so that the CI/CD pipeline will deploy to the corresponding environment when the time comes.

Enjoy!!!


Get an early start and try Clarive now. Install your 30-day trial here.