We’re excited to introduce you to the new operation in
Clarive: Git Timesync, available in the upcoming 7.10.6 release. This
operation is specifically designed to update the file timestamps in your job
directory to match their respective Git commit timestamps. Let’s dive in and
see how it works and how you can use it in your Clarive pipeline rule.

Motivation

When Git performs a checkout or clone operation, it intentionally does not update the timestamp of the files to reflect their last commit dates. Instead, all files receive the timestamp of when the clone or checkout occurred. The primary reason for this behavior is performance: updating file timestamps to their respective commit dates would require Git to examine the commit history of each file, which can be computationally intensive and slow, especially for large repositories with extensive histories. By using the clone or checkout date as the timestamp, Git can quickly populate the working directory without unnecessary overhead. Fresh timestamps also keep build systems such as make on the safe side: if checked-out files carried their old commit dates, make could consider targets up to date and skip rebuilds that are actually needed.

What is Git Timesync?

Git Timesync is a Clarive rule operation that updates the timestamps of files in the job directory so they align with the most recent Git commit timestamps. However, files without a Git timestamp or those from other repositories that don’t match the job’s repository will remain unchanged.
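
Conceptually, the operation behaves like the following shell sketch. This is only an illustration of the idea, not Clarive’s actual implementation:

    # Illustration only: set each Git-tracked file's mtime to its last commit timestamp
    git ls-files -z | while IFS= read -r -d '' f; do
      ts=$(git log -1 --format=%ct -- "$f")   # last commit time (Unix epoch) for the file
      [ -n "$ts" ] && touch -d "@$ts" "$f"    # files without a Git timestamp are left untouched
    done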

Configuration Essentials

Before using Git Timesync, it’s crucial to understand its configuration:

  • Path: This is the relative directory within each Git repository to be processed. By default, it uses . which means all files controlled by Git.

  • Git Repositories: An optional list of Git Repositories that you wish to process. If you leave it blank, all Git Repositories included in the job where the rule is running will be processed.

Remember, you can use multiple Timesync operations if you’re dealing with different path and repository combinations.

Note: Git Timesync is highly optimized, but if you’re working with a sizable repository or one with an extensive commit history, it may still take a while to process all the timestamps.

Dependencies

Ensure you execute Git Timesync after these operations:

  • Load Job Items into Stash
  • Checkout Job Items or another equivalent repository checkout.

How to Use Git Timesync in a Clarive Pipeline Rule?

  1. Open your desired Clarive pipeline rule.
  2. Navigate to the palette.
  3. Simply drag the Git Timesync operation from the palette and drop it into your rule after the aforementioned dependencies.
  4. Configure the Path and Git Repositories as per your requirements.
  5. Save your rule, deploy a changeset and you’re good to go!

Git Timesync is a handy operation for those who wish to synchronize file timestamps with their Git commit timestamps, ensuring consistency and clarity. So, next time you’re working on a Clarive project, give Git Timesync a try and let it handle the timestamp synchronization for you!

Happy clariving!

As a DevOps engineer, having the ability to securely and reliably execute commands, transfer files, and automate workflows across your infrastructure is absolutely essential. That’s why we’re excited to announce the release of ClaX – an open source remote deployment agent that makes this possible.

We understand the critical need for efficient tools that streamline remote deployment, file exchange, and command execution across a variety of platforms. That’s where ClaX comes into play. ClaX is a portable HTTP(s) remote deployment agent developed by Clarive, designed to empower DevOps professionals. Whether you deploy to Windows, Mac, or Linux servers, ClaX simplifies these tasks as an alternative to SSH or other more involved methods.

What is ClaX?

ClaX is more than just your run-of-the-mill remote agent. It’s a powerful and versatile tool that can run commands, exchange files, and handle more complex operations with ease. One of its standout features is its ability to read requests from stdin and write responses to stdout, making it a perfect fit for inetd integration.

ClaX is a lightweight HTTP-based agent that can be installed on Windows, Linux, and UNIX servers. It allows you to:

  • Run commands and scripts remotely
  • Upload and download files
  • Integrate with continuous deployment workflows

The ClaX agent exposes a REST API that can be called from any language or tool that supports HTTP requests. For security, the API uses SSL and access control via HTTP basic auth or client certificates.

Some example use cases:

  • Deploying application code to servers
  • Running data migration scripts
  • Automating post-deployment checks
  • Collecting logs and artifacts
  • Synchronizing files across a cluster

ClaX handles executing commands asynchronously. It streams the stdout and stderr back in real-time, while also returning the exit code and execution status. For long running processes, you can even set a timeout.

Why Use ClaX?

There are other tools that can do remote execution and orchestration. However, ClaX has a few advantages:

Portable and self-contained – ClaX is a single binary with no dependencies. Just drop it on a server and it works, no complex installation required. It also makes it easy to upgrade by just replacing the executable.

Embeddable and composable – ClaX exposes a simple HTTP API that can be called from any language or toolkit. It can be embedded into custom scripts and applications.

Lightweight and low overhead – ClaX has a low memory footprint, making it suitable for containers and cloud environments. The API is optimized for performance and scalability.

Cross platform – Tested on Linux, UNIX, Windows, and legacy systems like mainframes.

Platform Compatibility

At Clarive, we understand that DevOps environments are diverse, with various servers running different operating systems. That’s why ClaX has been rigorously tested and proven to work seamlessly on a wide range of platforms, including:

  • Debian GNU/Linux x86_64
  • FreeBSD 10.3
  • Mac OS 10.11+
  • Cygwin x86_64
  • Windows 2003, 2008, 2012+
  • Solaris 10 i86pc
  • z/OS 390
  • Raspbian ARMv7

This extensive platform support ensures that ClaX can be your go-to tool, regardless of your server’s operating system. Contact us if the ClaX binary you need is not available on the download page and we’ll generate it for you.

Why not just plain SSH?

ClaX can serve as a valuable alternative to using plain SSH, even though the Clarive server also supports SSH. Here are some advantages of using ClaX over SSH in specific scenarios:

  • Simplified Management: ClaX is designed as a remote deployment agent, which means it’s purpose-built for remote tasks. SSH, on the other hand, is a general-purpose remote access and administration tool. When you need to streamline specific tasks like running commands, exchanging files, or managing deployments, ClaX provides a more focused and simplified approach.

  • Security Features: ClaX offers a robust set of security features, including SSL support and basic authentication, ensuring secure communication and access control. While SSH is inherently secure, it might require additional configurations for specific use cases. ClaX simplifies this by providing security out of the box.

  • Platform Independence: ClaX is compatible with a wide range of platforms, including Windows, Mac, and various Linux distributions. SSH is primarily associated with Unix-like systems, making ClaX a more versatile choice for heterogeneous server environments. If you have mixed OS environments, ClaX can be a unifying solution.

  • Service Integration: ClaX can be run as a Windows service, allowing it to seamlessly integrate with Windows server environments. SSH does not function as a service in the same way, and while it can be used on Windows, ClaX simplifies the process of running tasks on Windows servers.

  • REST API: ClaX offers a REST API that simplifies automation and integration with other tools. SSH is a command-line tool and lacks the comprehensive API capabilities that ClaX provides. This API can be a game-changer when you need to automate deployment and management tasks.

  • Inetd Integration: ClaX’s ability to read requests from stdin and write responses to stdout makes it suitable for inetd integration. This is particularly useful when you want to streamline request handling in a more controlled manner. SSH, on the other hand, is not designed for this level of integration.

  • Configurability: ClaX’s INI-based configuration file makes it easy to customize and adapt to specific requirements. SSH typically relies on a more complex configuration file, which might be overkill for scenarios where simplicity is key.

In summary, ClaX is a specialized tool that excels in scenarios where you need a streamlined, secure, and platform-independent solution for remote deployment, file exchange, and command execution. While SSH is an essential and versatile tool for remote access and administration, it may require additional configuration and may not be as well-suited to specific DevOps tasks as ClaX. Depending on your use case, ClaX can be a valuable addition to your toolkit, enhancing your ability to manage and automate remote tasks efficiently.

Get Started with ClaX

Ready to give ClaX a try? Head over to the GitHub releases page to download the latest binary for your platform.

Installation takes just a minute:

  1. Drop the tar or zip archive onto your server and extract the clax binary.

  2. Create a clax.ini config file (an example is included). Choose between HTTP basic authentication
    over SSL or certificate-based authentication.

  3. Run clax -l clax.log -c clax.ini to start it as a daemon.

Now ClaX is running as a service exposing the HTTP API on the configured port.

The ClaX documentation contains examples of how to interact with the API using cURL. There are also client libraries available for Node.js and Python that make it even easier to work with the API by handling connections, serialization, and responses.
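
For example, here is a rough, hypothetical cURL session against the agent using HTTP basic auth. The host, port, credentials and endpoint paths below are placeholder assumptions; check the ClaX documentation for the actual routes and payload formats:

    # Hypothetical: run a command on the remote agent over HTTPS with basic auth
    curl -k -u clax_user:clax_password \
         -X POST "https://myserver.example.com:11801/command" \
         -d "command=uname -a"

    # Hypothetical: upload a file to the agent
    curl -k -u clax_user:clax_password \
         -T app.tar.gz "https://myserver.example.com:11801/tree/tmp/app.tar.gz"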

Windows Service

ClaX can also be run as a Windows service, making it even more versatile for managing your Windows servers. Here’s how you can install and control ClaX as a Windows service using the sc command:

  • Install ClaX service (make sure there’s a space after binPath=):
    sc create clax binPath= "C:\clax.exe -l C:\clax.log -c C:\clax.ini" start= auto
    
  • Start the service:
    sc start clax
    
  • Query the service status:
    sc query clax
    
  • Stop the service:
    sc stop clax
    

How to configure certificate-based authentication

Here are instructions for setting up the most secure option, SSL certificate-based authentication, with ClaX:

  1. Generate a Certificate Authority (CA) Certificate:
    • Start by creating a CA certificate, which will act as the root of trust for your ClaX setup. This certificate will be used to sign both the server and client certificates. Run the following command:
    openssl req -out clax_ca.pem -new -x509 -subj '/CN=ClaxCertificateAuthority'
    

    This command generates a CA certificate in the file clax_ca.pem with the common name “ClaxCertificateAuthority.”

  2. Create a Serial File:

    • To manage signed certificates, create a serial file. Use the following command:
    echo -n '00' > clax_file.srl
    
  3. Generate a Server Certificate:
    • Now, generate a server certificate for ClaX, which will be used for server-side SSL authentication. Use these commands:
    openssl genrsa -out clax_server.key 2048
    openssl req -key clax_server.key -new -out clax_server.req
    openssl x509 -req -in clax_server.req -CA clax_ca.pem -CAkey privkey.pem -CAserial clax_file.srl -out clax_server.pem -subj '/CN=clax-server'
    

    These commands create a server key (clax_server.key), a certificate signing request (clax_server.req), and the server certificate (clax_server.pem). Customize the common name (‘/CN’) to match your ClaX server’s name.

  4. Generate a Client Certificate (Optional):

    • If you require SSL client verification, generate client certificates for ClaX. These certificates allow clients to authenticate themselves to the ClaX server.
    • Use commands similar to those in step 3 to create client keys, certificate signing requests, and client certificates.
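    For example, mirroring step 3 (the file names below are illustrative, and match the ones used in step 5):
    openssl genrsa -out clax_client.key 2048
    openssl req -key clax_client.key -new -out clax_client.req
    openssl x509 -req -in clax_client.req -CA clax_ca.pem -CAkey privkey.pem -CAserial clax_file.srl -out clax_client.pem -subj '/CN=clax-client'
    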
  5. Convert Client Certificate to PKCS12 (Optional):
    • If you’ve generated client certificates and want to use them in applications supporting PKCS12 format, convert them with the following command:
    openssl pkcs12 -export -in clax_client.pem -inkey clax_client.key -out clax_client.p12
    

    This command creates a client PKCS12 certificate in the file clax_client.p12.

  6. Configure clax.ini File:

    • Open your clax.ini configuration file, and add or update the SSL section with the paths to the CA certificate, server certificate, and server key. The clax.ini file should have a section like this:
    [ssl]
    enabled = yes
    verify = yes
    cert_file = clax_server.pem
    key_file = clax_server.key
    ca_file = clax_ca.pem
    
  • Adjust the paths in the cert_file, key_file, and ca_file parameters to match the location of your SSL certificates. These settings ensure that ClaX uses SSL authentication with the generated certificates.

By following these steps, you’ll have created SSL certificates for ClaX and configured them in the clax.ini file, enabling secure SSL communication for your ClaX server. Clients connecting to ClaX will use these certificates for authentication and encryption, enhancing the security of your remote deployments and file exchanges.
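
To verify the setup, a client can present its certificate when calling the agent. Here is a minimal sketch with cURL; the host, port and path are placeholders, while the certificate file names match the ones generated above:

    # Connect with the client certificate and key, trusting the ClaX CA
    curl --cacert clax_ca.pem \
         --cert clax_client.pem --key clax_client.key \
         "https://myserver.example.com:11801/"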

We hope you find ClaX useful! Let us know if you have any feedback by opening an issue on the GitHub repo. The project is open source and we welcome contributions from the community.

One of the great initiatives this year for Clarive is the new Bantotal-Clarive integration packaged into a ready-to-use solution and distributed directly through the new and exciting BDevelopers marketplace.

We’ve worked closely with DLYA, our partner and the vendor behind Bantotal, to create a comprehensive offering for Bantotal clients and prospects for setting up a delivery toolchain on top of their Bantotal implementations. Clarive can be the perfect solution for you if:

  1. You and your organization would like to create a continuous delivery process around your Bantotal customizations and vendor packages and patches (called “zero deliveries”).

  2. You need to coordinate other DevOps pipelines already in place for non-Bantotal systems that must be orchestrated with the rest of your banking core.

DevOps is key for making financial systems changes flow at faster speeds without sacrificing quality. The Bantotal platform can greatly benefit from launching DevOps initiatives such as:

  • managing and deploying to QA and preproduction environments

  • deploying to and rolling back production environments

  • orchestrating the deployment of dependent systems

  • making banking core and mission critical changes predictable and repeatable

  • promoting a culture of safer changes and feedback loops within the teams that work around Bantotal

Don’t hesitate to get in touch with us or with the Bantotal team.

For more information, please read our Bantotal solution brief.

Source Code Maturity levels

Have you jumped on the DevOps wagon already? You probably have. But perhaps you are still not sure whether you are missing a certain tool in your toolbox for your current DevOps work.

Or maybe your organization or team is starting to plan to fully embrace DevOps and is researching exactly what needs to be installed in order to have the perfect toolchain. Perhaps you have a gap in some process that you are not even aware of. Establishing a good, solid DevOps toolchain will help determine ahead of time the degree of success of your DevOps practices.

In this blog post, we will present maturity level checklists for different DevOps areas so you have an idea of where you are in terms of Continuous Delivery.

We will review the maturity levels from the following DevOps aspects:

  • Source code management
  • Build automation
  • Testing
  • Managing database changes
  • Release management
  • Orchestration
  • Deployment and provisioning
  • Governance, with insights

Source code management tool

Commonly known as the repository. It provides version control and keeps track of changes in any set of files. A distributed revision control system such as Git aims for speed, data integrity, and support for distributed, non-linear workflows.

This is the maturity level checklist; we go from no or low maturity to a high maturity state:

  • No version control
  • Basic version control
  • Source/library dependency management
  • Topic branches flow
  • Sprint/project to branch traceability

Source Code Maturity levels

Build automation tool

Continuous Integration (CI) is a software development practice that aims for frequent integration of individual pieces of work. Commonly each person integrates at least once per day, resulting in several integrations during the day. Each integration should be verified by an automated Build Verification Test (BVT). These automated tests can detect errors just in time so they can be fixed before they create more problems later. This helps reduce integration issues, since the practice allows teams to develop faster and more efficiently.
This is the automation maturity checklist to see how you are doing in your CI:

  • No build automation. Built by hand. Binary check-in.
  • Build automated by central system
  • Reusable build across apps/projects
  • Continuous/nightly builds
  • Feedback loop for builds

Automation Maturity Levels

Testing framework

Test automation can cover code, systems, services, etc. It allows every modification to be tested, in order to guarantee good QA. Even a daily or weekly release of code can produce a report that is sent early every morning. To accomplish this you can install the Selenium app in Clarive.

This checklist will help to determine your testing practices level:

  • No tests
  • Manual tests
  • Automated unit/integration tests
  • Automated interface tests
  • Automated and/or coordinated acceptance tests
  • Test metrics, measurements, and insights
  • Continuous feedback loop and low test failure

Testing Maturity Levels

Database Change Management

It’s important to make sure database changes are taken into consideration when releasing to production. Otherwise, your release team will be working late at night trying to finish up a release with manual steps that are error-prone and nearly impossible to roll back.

Check what is your team’s database management current state:

  • Manual data/schema migrations
  • Automated un-versioned data/schema migrations
  • Versioned data/schema migrations
  • Rollback-enabled data/schema migrations

Database Maturity Levels

Since database schema changes are sometimes delicate, make sure to include your DBA team in the peer review process, so that changes 1) are code; 2) can be merged and patched; 3) can be code reviewed.

Release Management and Orchestration

You can fully orchestrate tools that are involved in the process and manage your release milestones and stakeholders with Clarive.

Imagine that a developer makes a change in the code. After this happens, you need to promote the code to the integration environments, send notifications to your team members and run the testing plan.

Are you fully orchestrating your tools? Find out with this checklist:

  • Infrequent releases, releases need manual review and coordination
  • Releases are partially automated but require manual intervention
  • Frequent releases, with defined manual and automated orchestration and calendaring
  • Just-in-time or On-demand releases, every change is deployed to production

Orchestration Maturity Levels

Deployment tool

Deploying is the core of how you release your application changes.

How is your team deploying?

  • Manual deployment
  • Deployment with scripts
  • Automated deployment server or tool
  • Automated deployment and rollback
  • Continuous deployment with canary, blue-green and feature-enabling technology

Deployment Maturity Levels

Provisioning

As part of deployment, you should also review your provisioning tasks and requirements. Remember that it’s important to provision the application infrastructure for all required environments, keep environment configuration in check and dispose of any intermediate environments in the process.

Yes, provisioning also has several maturity levels:

  • You provision environments by hand
  • Environment configuration with scripts as part of deployment
  • Provisioning of disposable environments with every deployment
  • Full provisioning, disposing and infrastructure configuration as part of deployment
  • Full tracking of environment-application dependencies and cost management

Provisioning Maturity Levels

We have come a long way doing this with IaC (Infrastructure as Code). Nowadays a lot can be accomplished with less pain using technologies such as containers and serverless, but you still need to coordinate all cloud (private and public) and related dependencies, such as container orchestrators.

In your path to provisioning automation and hands-free infrastructure, make sure you have a clear (and traceable) path to the Ops part of your DevOps team or organization, and avoid bottlenecks when infrastructure just needs a magic touch of the hand. One way of accomplishing that is to have a separate stream or category of issues assigned to the DevOps teams in charge of infrastructure provisioning. We’ll cover that in a later blog post.

With the right reports, you’d be amazed by how many times releases get stuck in infrastructure provisioning hell…

Governance

Clarive also has productivity and management tools, such as Kanban swimlanes, planning, reports and dashboards, that give managers a way to identify problems and teams a way to quickly check the overall performance of the full end-to-end process.

Here are the key points to make sure you evolve the overall governance of your DevOps process:

  • There is no end-to-end relationship between request (why) and release (when, how, what)
  • Basic Dev-to-Ops traceability, with velocity and release feedback
  • Full traceability from request to deployment
  • Immediate feedback and triggers

Governance Maturity Levels

There you go, let’s DevOps like the grown-ups do

In this post, we have presented the main Continuous Delivery aspects that every DevOps team should be looking to improve, along with their respective readiness levels. So go with your team and start planning a good DevOps adoption plan 😉


Schedule a demo with one of our specialists and start improving your DevOps practices.


In this post we will explain how to deploy applications with Clarive EE to both the Google Play Store and the iOS store (Apple Store), thanks to the Clarive Plugins.

This process does not require any additional programming knowledge. Thanks to the interface offered by the rule designer, we can configure the deployments.

For more information about Clarive and the elements we use, check our Docs.

Deploying to the Google Play Store
Overview

For Android applications, the Gradle plugin is used to build the application and the Fastlane plugin to send it to the Play Store automatically.

Configuration

Beforehand, a first version of the application must already have been uploaded so that the deployment can be performed automatically.

We create a Generic Server from the Resources->Server panel. On this server we have the application, along with Gradle and Fastlane installed.

Once the server is configured, we go to the rule designer and create a new rule of type Pipeline in the Admin panel.

We use the PRE phase to build the application. To do so, we drag in the Gradle compile operation:

In the operation’s configuration, we select the previously configured server and fill in the fields so that it performs the build.

Next, we drag the Fastlane task operation into the RUN phase to configure sending the application to the Play Store:

We complete the operation’s form with all the data needed for the upload.

With this, the deployment to the Play Store is ready. Next, we add the operations to deploy to the Apple Store within the same Pipeline.

Deploying to the Apple Store
Overview

In this case, the Fastlane plugin is used to build and send the application to the Apple Store automatically. Xcode must be installed along with Fastlane.

Configuration

We create a Generic Server, just as in the previous case. On this server we have the application, along with Fastlane and Xcode installed.

We must also configure the access credentials for our Apple Store account in the iOSCredentials Resource, from the Resources->iOS panel.

Just as in the Android case, we build our application in the PRE phase with the Fastlane task operation:

In the operation’s configuration, we select the server and the credentials, and fill in the rest of the required fields.

Next, we configure sending the application to the Apple Store, using the same Fastlane task operation in the RUN phase.

We select the ‘Upload App’ option and complete the fields.

With this Pipeline we now have automatic publishing configured for both the Play Store and the Apple Store.
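
Under the hood, these operations drive standard Gradle and Fastlane tooling. As a rough illustration only (the Gradle task, Fastlane lanes, scheme and file paths below are placeholders, and the exact invocation your pipeline uses may differ):

    # Android: build a release APK with Gradle, then upload it to the Play Store with Fastlane supply
    ./gradlew assembleRelease
    fastlane supply --apk app/build/outputs/apk/release/app-release.apk --track production

    # iOS: build with Fastlane gym and upload the result to App Store Connect with deliver
    fastlane gym --scheme MyApp
    fastlane deliver --ipa MyApp.ipa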

If you have any questions, you can get in touch with us at the Clarive Community.

Docker image management

The problem at hand

The situation with the DevOps toolchain is that it just has too many moving parts. And these moving parts have become a cumbersome part of delivering applications.

Have you stopped to think about how many different tools are part of your pipeline? And how this is causing your delivery to slow down?

These might be some of the problems you could be facing when setting up your continuous delivery pipeline:

  • Changes in the application require changes in the way it’s built/deployed

  • New components require new tools

  • Many build, test, and deploy tools have plenty of dependencies

The container bliss

Containers are basically lightweight kits that include pieces of software ready to run the tasks in your pipeline. When containers are used as part of the pipeline, they can include all the dependencies: code, runtime, system tools, system libraries, settings. With containers, your software will run the same pipeline no matter what your environment is. You can run the same container in development and staging environments without opening Pandora’s box.

Containers are the way to be consistent in your CI/CD and releasing/provisioning practices.

Other advantages of containers are:

  • Containers can be versioned

  • Containers can hold the most relevant DevOps infrastructure

  • Containers are cheap and fast to run

  • Ops can let Dev drive CI/CD safely (by giving Devs templatized containers)

Clarive and Docker: what a combo!

Docker is therefore a great companion to your DevOps stack. Docker containers allow your project and repository rulebooks to run pipelines alongside any necessary infrastructure without requiring additional software packages to be installed in the Clarive server. Clarive runs your DevOps pipelines within managed containers.

By using containers in Clarive you can:

  • Isolate your users from the server environment so that they cannot break anything.

  • Version your infrastructure packages, so that different versions of an app can run different versions of an image.

  • Simplify your DevOps stack by having most of your build-test-deploy continuous delivery workflows run on one server (or more, if you have a cluster of Clarive servers), instead of having to install runners for every project everywhere.

Clarive and Docker flowchart


Curating a library of DevOps containers

Using a registry is a good way of keeping a library of containers that target your continuous delivery automation. With Clarive you can maintain a copy of a local registry that is used exclusively for your DevOps automation.

Defining “natures”

Each repository in your project belongs to or implements one or more natures. The nature of your code or artifacts defines how it is going to be built and deployed. A nature is a set of automation and templates. These templates can use different Docker containers to run.

For example, your application may require Node + Python, so two natures. If you keep these natures in templates they will be consistent and will help developers comply with a set of best practices on how to build, validate, lint, test and package new applications as they move to QA and live environments.

Running commands on other servers

Clarive uses Docker for running shell commands locally. That guarantees that rulebooks (in the project’s .clarive.yml file) will not have access to the server(s) running your pipelines.

But you can still run shell commands on other servers and systems, such as Linux, Windows, various Unix flavors and other legacy systems (including the mainframe!) using the host: option in the shell: command.

How do I use my own containers?

If the container is not available on the Clarive server, the Clarive rulebook downloads the container from Docker Hub.

So, to use your own containers, you have two options:

  • Upload them to Docker Hub, then use them from your rulebook; Clarive will download them on the first run (see the sketch after this list).

  • Install them on your Clarive server. On the first run Clarive will build another version of your container based on Clarive’s default Dockerfile, called clarive/<your container>. You don’t need to prefix clarive/ to the name; that’s done for you automatically.
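
A minimal sketch of the first option using the standard Docker CLI (the image name, tag and Dockerfile are placeholders):

    # Build a custom image that bundles your pipeline tooling (e.g. Node + Python)
    docker build -t myorg/node-python:1.0 .
    # Publish it to Docker Hub so the rulebook can pull it on the first run
    docker push myorg/node-python:1.0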

Docker containers in your pipeline

Manage all active Docker containers in your pipeline from within Clarive


Getting started today

Using containers is an important step in implementing a continuous delivery and continuous deployment process that is streamlined and avoids environment clutter.

Head over to our 30-day trial and let Clarive run your DevOps automation in Docker containers for better consistency and easy setup of your temporary environments.


Learn more about the Clarive Docker admin interface in this blog post and learn how to manage containers and Docker images.


Running pipeline jobs to critical environments often requires a scheduled execution to take place. Clarive scheduled jobs always run in 3 main steps, called “Pre”, “Run” and “Post”.

Why run a pipeline in phases?

Most of the time, a deployment job should not wait for the scheduled time to run all of its tasks; many of them can run as soon as the job is scheduled.

There are several phases or stages to every scheduled deployment, most of which can run as early as the job is scheduled. Tasks such as building, packaging, testing and even provisioning infrastructure can take place earlier if they do not impact the productive environments.

When defining a pipeline, always think of what can be detected in earlier phases so that the evening deployment run will not fail on something that could have been checked previously.

Separating your pipeline into different stages

Pipeline phases

Here are the different pipeline deployment phases you can run in Clarive.

Deployment preparation (PRE)

The deployment pipeline will take care of:

  • Creating the temporary deployment directory

  • Identifying the files changed to be deployed (BOM)

  • Checking out the files related to the changesets to be deployed from either code repositories (code) or directly attached to the changesets (SQL).

  • Renaming environment-specific files (e.g. web{PROD}.properties will be used only for deployments to the PROD environment)

  • Replacing variables inside the files (e.g. replacing ${variable} with the real value configured for the affected project in the target environment); a rough sketch of these two steps appears after this list

  • Nature detection: Clarive will identify the technologies to deploy by analyzing the BOM
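
As a rough illustration of the renaming and variable-replacement steps above (the file and variable names are made up, and this is not Clarive’s actual mechanism):

    # Keep only the PROD-specific properties file for a deployment to PROD
    ENV=PROD
    cp "web{${ENV}}.properties" web.properties
    # Replace a ${variable} placeholder with the value configured for the target environment
    APP_URL=https://prod.example.com
    sed -i "s|\${app_url}|${APP_URL}|g" web.properties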

Common deployment operations

Deployment operations (RUN)

Actual deployment operations will be executed in this phase of the Clarive job (moving binaries, restarting servers, checking the installation was successful, etc.)

Deployment operations

Post-deployment operations (POST)

After the deployment is finished Clarive will take care of any task needed depending on the final status.

Some of the typical operations performed then are:

  • Notify users

  • Transition changeset states

  • Update repositories (e.g. tag Git repositories with an environment tag)

  • Synchronize external systems

  • Cleanup temporary directories locally and remotely

  • Synchronize related topic statuses (defects, releases, etc.)

Post deployment operations

Recap

Regardless of whether you use Clarive or not, when defining your deployment pipelines always think in terms of these three stages or phases.

  • PRE – everything that can run as soon as it has been scheduled
  • RUN – crucial changes that run at the scheduled time
  • POST – post activities or error recovery procedures

Happy DevOps!


Try Clarive now and start with your DevOps journey to continuous delivery.



Environment provisioning is a key part of a continuous delivery process. The idea is simple: we should not only build, test and deploy application code, but also the underlying application environment.

Are your environments being provisioned on-demand as applications deploy? Can Devs request new environments to fit changes in how the application is built? Is environment configuration and modeling built into the application deployment process?

What is the “environment”

The application environment consists of 3 main areas:

  • Infrastructure

  • Configuration

  • Dependencies

Infrastructure is the most important element of the environment, as it defines where the application will run, the specific configuration needs and how dependencies need to interact with the application.

Configuration is the next most important aspect of the application environment. Configuration dictates both how the application behaves in a given infrastructure and how the infrastructure behaves in relation to the underlying application.

Dependencies are all the different modules or systems an application depends on, from libraries to services or other applications.

What is infrastructure today?

The concept of infrastructure refers to the components, hardware and software, needed to operate an application service or system. But, as hardware has been abstracted away in favor of scalable, reliable and affordable solutions, the true definition of infrastructure is something different for every application.

We could say infrastructure now advances almost as fast as application technologies and languages themselves. Infrastructure has melded with the application in such a way that now we have to basically pick the infrastructure as part of the architecture decisions.

Infrastructure has never stopped evolving:

  • virtualization, or how to provision infrastructure in minutes instead of days
  • containerization, or how to provision infrastructure in seconds instead of minutes
  • the cloud, or how to provision infrastructure you do not own
  • serverless, or how not to provision infrastructure as it will provision itself on demand
  • and so on…

And IT infrastructure is not just vertical, but also horizontal, as platforms can also connect many service pods and execution silos.

How about configuration and dependencies?

Configuration and dependencies are very profound topics that deserve their own articles. But let’s say that today both tend to be containerized one way or the other, meaning both programming languages and infrastructure technologies promote the packaging of environment configuration and dependencies as part of the deliverable.

Business Challenges

For any organization to successfully test and release new applications or application versions, the appropriate environment(s) need first to be in place. The enterprise follows many different procedures for supplying environments to applications; they may be manual or automated, or a combination of both.

It also may be in the hands of different teams or roles within the enterprise, from developers to operations and sysadmins.

Environment provisioning is how an organization manages infrastructure before, during and after the lifespan of an application or service. It is independent of Service Oriented Architecture (SOA), micro-services, a full MVC application or any other execution structure, as running services and applications all have environment needs at every stage:

  • Before: set up development and unit-testing environments and basic SCM controls. Provision environments as the delivery flow advances.

  • During: applications change with new requirements or bugs, and they may need to scale their current infrastructure or provision new parts of it as they go.

  • After: when done, decommissioning environments is important to deallocate valuable resources.

Business Benefits

Now why would you want to automate environment provisioning? Or better yet, how would you demonstrate that spending time automating and building provisioning into continuous deployment can be beneficial for your organization?

Reduce Average Time to Provision

This KPI offers a direct measurement of the end-to-end process for provisioning new infrastructure, including physical and virtual machines, processing power, storage, databases, middleware, and communication and networking infrastructure, among others. IT managers and business users can employ this metric as an indication of the degree to which IT is supporting the needs of the business.

Reduce Service Complexity

Centralizing all environment automation operations in one place simplifies making decisions that affect the business, permitting quick turnover of scalability investments, as well as impact analysis and change execution when rearranging infrastructure resources to meet new business requirements.

Get Service Allocation Insights

Measure how and for how long infrastructure is being used, how fast it’s being delivered and disposed of, and what business requirements are behind each request.

Greater service efficiency and decreased costs

Provisioning and disposing of environments following established patterns and templates helps reduce waste and complexity. Modular environments are also easier to debug and scale, as changes impact only one application instead of a cluster of applications. The bottom line is that modular environments reduce the cost of running tests and are easier to throttle in production, which also translates into applications that only consume what they need at a given time.

Use Cases

Here is a sample of use cases that lead an organization to implement a provisioning solution:

  • Onboarding new application teams that have to deal with organizational complexity and adhere to standards and release policies. Provisioning a baseline environment catalog will help circumvent organizational obstacles and technological challenges.

  • Coordinating code releasing and provisioning to make sure infrastructure changes arrive just-in-time with the code that requires it.

  • Dev and QA environment provisioning, to simplify and control the process of procuring and automating environment generation.

  • Self-service provisioning, offering users a “catalog” with a set of service requests and tasks that can be launched and managed.

  • Infrastructure code is being pushed and continuously deployed by developers (e.g. Dockerfiles or Chef recipes), but requires more control and end-to-end visibility.

Get started today

If you feel you’re not taking full advantage of environment provisioning, here are a few ideas to get you started ASAP:

  • When starting new apps, or designing architecture, try to predict how environments will be provisioned at different stages of the delivery pipeline, especially QA and Production.

  • Build apps that can have features tested on-demand by spinning up throw-away environments.

  • Have your DevOps processes (e.g. scripts, builds) run in containers too.

  • Add environment provisioning to your CI/CD pipeline.

  • Offer users a way to control and define the environment rules and infrastructure needs, as code if possible.

In general, containers are a great way to go thanks to their infrastructure-as-code nature and natural DevOps fit. Serverless, on the other hand, can abstract away many environment concerns and make better use of resources as applications grow.


Let Clarive be your best friend in your DevOps journey to continuous delivery. Start with your 30-day trial now.