Rule Execution Monitoring with Clarive Keeper Clarive Announces Rule Keeper: Enhanced Monitoring and Debugging for Your DevOps Workflows At Clarive, we’re always striving to make our platform more transparent, debuggable, and user-friendly. We’re excited to announce the latest update: Rule Keeper, a powerful monitoring and debugging tool that provides real-time insights into your rule executions. This new feature allows you to track, monitor, and debug rule performance with unprecedented visibility, helping you identify bottlenecks, troubleshoot issues, and optimize your DevOps workflows. In this blog post, we’ll dive deep into the technical aspects of using Rule Keeper, explore its monitoring capabilities, and provide practical examples to help you get the most out of this powerful debugging tool. What is Rule Keeper? Rule Keeper is an interactive monitoring system that tracks rule execution, performance metrics, and provides detailed insights into rule behavior. It’s designed to help administrators and developers understand how rules are performing, identify potential issues, and optimize their DevOps processes. With Rule Keeper, you can monitor rule executions in real-time, analyze performance patterns, and quickly debug problems when they occur. Why Use Rule Keeper? Enabling Rule Keeper in your Clarive environment offers several advantages: Real-time Monitoring: Watch your rules execute in real-time with live updates and performance metrics. Performance Optimization: Identify slow-running rules and bottlenecks that could impact your deployment pipeline. Quick Debugging: Rapidly locate and resolve rule execution issues with detailed trace information. Capacity Planning: Understand resource requirements for rule execution and plan your infrastructure accordingly. Compliance and Auditing: Track rule execution for audit and compliance purposes with detailed logs. Setting Up Rule Keeper Configuring Rule Keeper involves a simple configuration change in your Clarive instance. This feature is controlled entirely by a single parameter in your configuration file. Basic Configuration To enable Rule Keeper, you need to modify your config/clarive.yml file: # rule tracing and management rule_keeper: 1 After making this change, restart your web server for the configuration to take effect. No additional setup is required – the monitoring is automatically active when enabled. Verifying the Setup Once configured, you can verify that Rule Keeper is working by running: cla keeper -e 'ps' -c myconfig If you see a warning message about rule_keeper being false, double-check your configuration file and ensure you’ve restarted the web server. Using Rule Keeper Rule Keeper provides both interactive and command-line interfaces for monitoring your rules. Let’s explore the most powerful features. Interactive Mode Start the interactive monitoring session: cla keeper -c myconfig This opens a REPL (Read-Eval-Print Loop) where you can run various monitoring commands. Command-Line Mode Execute single commands and exit immediately: cla keeper -e 'ps' -c myconfig cla keeper -e 'ps W' -c myconfig cla keeper -e 'ps t=form' -c myconfig Key Monitoring Commands Process Listing with ps The ps command is your primary tool for viewing rule executions. It provides a comprehensive view of all rule activities with pagination support. 
Basic Usage: ps Advanced Options: – W – Wide mode (shows start/end timestamps) – R – Show only currently running rules – t=type1,type2 – Filter by rule types (comma-separated) Examples: ps # Show all rules ps W # Wide mode with timestamps ps R # Only running rules ps t=rule # Only rule type executions ps t=rule,workflow # Multiple types The output shows: – SEQ: Sequence number for tracking – ID: Rule identifier – RULE: Rule name – TYPE: Rule type (form, workflow, etc.) – STATUS: Current status (running, ok, ko, killed) – HOST: Hostname where rule is executing – USER: User executing the rule – PID: Process ID – DURATION: Execution time Real-time Monitoring with top The top command provides real-time monitoring with auto-refresh capabilities, similar to the Unix top command. top Interactive Controls: – q – Quit the top view – w – Toggle wide mode (show timestamps) – r – Toggle running-only mode – t – Set type filter This is particularly useful for monitoring active deployments or troubleshooting ongoing issues. Debugging with Trace and Code Commands When you encounter a rule that’s stuck or behaving unexpectedly, Rule Keeper provides powerful debugging tools. Using trace for Stack Analysis The trace command shows the complete stack trace for a specific rule execution: trace 402 This displays the full call stack, helping you understand exactly where the rule execution is occurring and what might be causing issues. Using code for Context Analysis The code command shows the actual DSL code around the line where an error occurred: code 402 This displays the code with line numbers, highlighting the problematic area. You can also specify the number of context lines: code 402 10 This shows 10 lines before and after the error location, providing more context for debugging. Complete Debugging Workflow Here’s a typical debugging workflow when you encounter a problematic rule: Identify the Issue: Use ps to find the problematic rule execution Kill the Process: Use kill to stop the stuck rule Analyze the Trace: Use trace to see the stack trace Examine the Code: Use code to see the actual code around the error Fix and Retry: Make the necessary corrections and restart the rule Example Workflow: # Find stuck rules ps R # Kill a stuck rule (using sequence number 402) kill 402 # Analyze what happened trace 402 # Look at the code around the error code 402 5 Advanced Monitoring Features Process Management Kill Processes: kill <signal> <seq> If no signal is specified, it uses WINCH (28). This is useful for gracefully stopping stuck rules. Process Information: pid <pid> Shows detailed process information for a specific PID. Data Management Cleanup Old Entries: cleanup old # Delete entries > 1h old cleanup truncate # Delete all entries Remove Orphaned Entries: roadkill This removes entries where the process is missing or not owned by the current user. Logging and Output Log to File: log debug_output.txt Close Log: close Performance Optimization with Rule Keeper Rule Keeper isn’t just for debugging – it’s also a powerful tool for performance optimization. Identifying Bottlenecks Use the ps command with duration sorting to identify slow-running rules: ps W Look for rules with unusually long durations and investigate further. Monitoring Resource Usage The top command helps you monitor resource usage in real-time, allowing you to identify patterns and optimize accordingly. Capacity Planning By analyzing rule execution patterns, you can better understand your infrastructure requirements and plan for scaling. 
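If you would rather collect this information continuously than run ad-hoc checks, one simple approach is to drive the keeper from a scheduler and keep the output for later review. The snippet below is only a sketch: it assumes the cla client and your myconfig configuration are available on the host running cron, and that /var/log/clarive exists.

# crontab entry: capture a wide process listing every 15 minutes for later analysis
*/15 * * * * cla keeper -e 'ps W' -c myconfig >> /var/log/clarive/keeper-ps.log 2>&1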
Best Practices Regular Monitoring Set up regular monitoring sessions to check rule health: # Quick health check cla keeper -e 'ps R' -c myconfig # Detailed analysis cla keeper -e 'ps W' -c myconfig Debugging Workflow Establish a consistent debugging workflow: Always start with ps to get an overview Use top for real-time monitoring of active issues Use trace and code for detailed analysis Document findings and solutions Performance Tracking Regularly review rule execution times and identify optimization opportunities. Troubleshooting Common Issues Rule Keeper Not Working If Rule Keeper isn’t showing any data: Verify rule_keeper: 1 is set in your config Restart the web server Check that rules are actually running No Rules Showing If ps shows no results: Ensure rules are actively executing Check the time range (older entries may be cleaned up) Verify you’re looking at the correct environment Permission Issues If you encounter permission errors: Ensure you have the necessary permissions to access the database Check that the MongoDB connection is working Verify your user has access to the rule_keeper collection Conclusion Rule Keeper represents a significant advancement in Clarive’s monitoring and debugging capabilities for your custom rules. Whether you are troubleshooting a complex deployment issue, optimizing performance, or simply monitoring your DevOps workflows, Clarive’s Rule Keeper offers the visibility and tools necessary for understanding what’s running where, how and what. We encourage all users to explore these features and incorporate Rule Keeper into their daily operations, just turn the toggle on and give it a try. Especially if topic loading is taking a while in your installation. It could be due to a long running form, field or workflow rule. Stay tuned for more updates and features! If you have any questions or need assistance with your Rule Keeper setup, please do not hesitate to contact our support team. Happy monitoring! For more information about Rule Keeper and other Clarive features, visit our documentation.
New Git Timesync Rule Operation

We're excited to introduce a new operation in Clarive: Git Timesync, available in the upcoming 7.10.6 release. This operation is specifically designed to update the file timestamps in your job directory to match their respective Git commit timestamps. Let's dive in and see how it works and how you can use it in your Clarive pipeline rule.

Motivation

When Git performs a checkout or clone operation, it intentionally does not update the timestamps of the files to reflect their last commit dates. Instead, all files receive the timestamp of when the clone or checkout occurred. The primary reason for this behavior is performance. Updating file timestamps to their respective commit dates would require Git to examine the entire commit history of each file, which can be computationally intensive and slow, especially for large repositories with extensive histories. By using the clone or checkout date as the timestamp, Git can quickly populate the working directory without any unnecessary overhead. Furthermore, maintaining consistent timestamps helps build systems, like make, determine whether a file needs to be rebuilt, ensuring efficient and predictable builds.

What is Git Timesync?

Git Timesync is a Clarive rule operation that updates the timestamps of files in the job directory so they align with the most recent Git commit timestamps (a rough manual sketch of what the operation automates appears at the end of this post). Files without a Git timestamp, or files from repositories that don't match the job's repository, remain unchanged.

Configuration Essentials

Before using Git Timesync, it's important to understand its configuration:

- Path: the relative directory within each Git repository to be processed. By default it is ., which means all files controlled by Git.
- Git Repositories: an optional list of Git repositories that you wish to process. If you leave it blank, all Git repositories included in the job where the rule is running will be processed.

Remember, you can use multiple Timesync operations if you're dealing with different path and repository combinations.

Note: Git Timesync is well optimized, but if you're working with a sizable repository or one with an extensive commit history, it may still take a while to process all the timestamps.

Dependencies

Make sure you execute Git Timesync after these operations:

- Load Job Items into Stash
- Checkout Job Items (or another equivalent repository checkout)

How to Use Git Timesync in a Clarive Pipeline Rule?

- Open your desired Clarive pipeline rule.
- Navigate to the palette.
- Drag the Git Timesync operation from the palette and drop it into your rule after the aforementioned dependencies.
- Configure the Path and Git Repositories as per your requirements.
- Save your rule, deploy a changeset, and you're good to go!

Git Timesync is a handy operation for those who wish to synchronize file timestamps with their Git commit timestamps, ensuring consistency and clarity. So, next time you're working on a Clarive project, give Git Timesync a try and let it handle the timestamp synchronization for you! Happy clariving!
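As mentioned above, here is a rough manual sketch of the idea that Git Timesync automates. It is not the operation's actual implementation, just plain Git and GNU touch; run it inside a checked-out repository and expect it to be slow on repositories with long histories, since it runs one git log per tracked file:

#!/usr/bin/env bash
# Set each tracked file's modification time to its last commit time (GNU coreutils touch assumed)
git ls-files -z | while IFS= read -r -d '' file; do
    ts=$(git log -1 --format=%ct -- "$file")       # last commit time as a Unix epoch
    [ -n "$ts" ] && touch -d "@$ts" -- "$file"     # files with no recorded history are left alone
done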
Introducing ClaX: Your Swiss Army Knife for Remote Deployment

As a DevOps engineer, having the ability to securely and reliably execute commands, transfer files, and automate workflows across your infrastructure is absolutely essential. That's why we're excited to announce the release of ClaX – an open source remote deployment agent that makes this possible.

We understand the critical need for efficient tools that streamline remote deployment, file exchange, and command execution across a variety of platforms. That's where ClaX comes into play. ClaX is a portable HTTP(S) remote deployment agent developed by Clarive, designed to empower DevOps professionals. Whether you deploy to Windows, Mac, or Linux servers, ClaX simplifies these tasks as an alternative to SSH or other more involved methods.

What is ClaX?

ClaX is more than just your run-of-the-mill remote agent. It's a powerful and versatile tool that can run commands, exchange files, and handle more complex operations with ease. One of its standout features is its ability to read requests from stdin and write responses to stdout, making it a perfect fit for inetd integration.

ClaX is a lightweight HTTP-based agent that can be installed on Windows, Linux, and UNIX servers. It allows you to:

- Run commands and scripts remotely
- Upload and download files
- Integrate with continuous deployment workflows

The ClaX agent exposes a REST API that can be called from any language or tool that supports HTTP requests. For security, the API uses SSL and access control via HTTP basic auth or client certificates.

Some example use cases:

- Deploying application code to servers
- Running data migration scripts
- Automating post-deployment checks
- Collecting logs and artifacts
- Synchronizing files across a cluster

ClaX executes commands asynchronously. It streams stdout and stderr back in real time, while also returning the exit code and execution status. For long-running processes, you can even set a timeout.

Why Use ClaX?

There are other tools that can do remote execution and orchestration. However, ClaX has a few advantages:

- Portable and self-contained – ClaX is a single binary with no dependencies. Just drop it on a server and it works; no complex installation required. It is also easy to upgrade by simply replacing the executable.
- Embeddable and composable – ClaX exposes a simple HTTP API that can be called from any language or toolkit. It can be embedded into custom scripts and applications.
- Lightweight and low overhead – ClaX has a low memory footprint, making it suitable for containers and cloud environments. The API is optimized for performance and scalability.
- Cross platform – Tested on Linux, UNIX, Windows, and legacy systems like mainframes.

Platform Compatibility

At Clarive, we understand that DevOps environments are diverse, with various servers running different operating systems. That's why ClaX has been rigorously tested and proven to work seamlessly on a wide range of platforms, including:

- Debian GNU/Linux x86_64
- FreeBSD 10.3
- Mac OS 10.11+
- Cygwin x86_64
- Windows 2003, 2008, 2012+
- Solaris 10 i86pc
- z/OS 390
- Raspbian ARMv7

This extensive platform support ensures that ClaX can be your go-to tool, regardless of your server's operating system. Contact us if the ClaX binary you need is not available on the download page and we'll generate it for you.

Why not just plain SSH?

ClaX can serve as a valuable alternative to plain SSH, even though the Clarive server also supports SSH.
Here are some advantages of using ClaX over SSH in specific scenarios:

- Simplified Management: ClaX is designed as a remote deployment agent, which means it's purpose-built for remote tasks. SSH, on the other hand, is a general-purpose remote access and administration tool. When you need to streamline specific tasks like running commands, exchanging files, or managing deployments, ClaX provides a more focused and simplified approach.
- Security Features: ClaX offers a robust set of security features, including SSL support and basic authentication, ensuring secure communication and access control. While SSH is inherently secure, it might require additional configuration for specific use cases. ClaX simplifies this by providing security out of the box.
- Platform Independence: ClaX is compatible with a wide range of platforms, including Windows, Mac, and various Linux distributions. SSH is primarily associated with Unix-like systems, making ClaX a more versatile choice for heterogeneous server environments. If you have mixed OS environments, ClaX can be a unifying solution.
- Service Integration: ClaX can be run as a Windows service, allowing it to seamlessly integrate with Windows server environments. SSH does not function as a service in the same way, and while it can be used on Windows, ClaX simplifies the process of running tasks on Windows servers.
- REST API: ClaX offers a REST API that simplifies automation and integration with other tools. SSH is a command-line tool and lacks the comprehensive API capabilities that ClaX provides. This API can be a game-changer when you need to automate deployment and management tasks.
- Inetd Integration: ClaX's ability to read requests from stdin and write responses to stdout makes it suitable for inetd integration. This is particularly useful when you want to streamline request handling in a more controlled manner. SSH, on the other hand, is not designed for this level of integration.
- Configurability: ClaX's INI-based configuration file makes it easy to customize and adapt to specific requirements. SSH typically relies on a more complex configuration file, which might be overkill for scenarios where simplicity is key.

In summary, ClaX is a specialized tool that excels in scenarios where you need a streamlined, secure, and platform-independent solution for remote deployment, file exchange, and command execution. While SSH is an essential and versatile tool for remote access and administration, it may require additional configuration and may not be as well suited to specific DevOps tasks as ClaX. Depending on your use case, ClaX can be a valuable addition to your toolkit, enhancing your ability to manage and automate remote tasks efficiently.

Get Started with ClaX

Ready to give ClaX a try? Head over to the GitHub releases page to download the latest binary for your platform. Installation takes just a minute:

- Drop the tar or zip archive onto your server and extract the clax binary.
- Create a clax.ini config file; an example is included.
- Choose between basic HTTPS or certificate-based authentication.
- Run clax -l clax.log -c clax.ini to start it as a daemon.

Now ClaX is running as a service exposing the HTTP API on the configured port. The ClaX documentation contains examples of how to interact with the API using cURL (a rough, illustrative sketch follows below). There are also client libraries available for Node.js and Python that make it even easier to work with the API by handling connections, serialization, and responses.
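Before moving on to Windows-specific setup, here is a purely illustrative cURL sketch of what a raw API call can look like. The host, port, path and payload below are placeholders rather than ClaX's documented endpoints, so check the ClaX documentation for the real routes and request format; the point here is simply HTTPS plus basic auth from the command line:

# Illustrative only: replace host, port, path and body with the values from the ClaX docs
curl -k -u clax_user:secret \
     -X POST 'https://myserver:8888/command' \
     -H 'Content-Type: application/json' \
     -d '{"command": "uptime"}'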
Windows Service ClaX can also be run as a Windows service, making it even more versatile for managing your Windows servers. Here’s how you can install and control ClaX as a Windows service using the sc command: Install ClaX service (make sure there’s a space after binPath=): sc create clax binPath= "C:\clax.exe -l C:\clax.log -c C:\clax.ini" start= auto Start the service: sc start clax Query the service status: sc query clax Stop the service: sc stop clax How to configure certificate-based authentication Here’s instructions to use the most secure, SSL certificate-based authentication with ClaX: Generate a Certificate Authority (CA) Certificate: Start by creating a CA certificate, which will act as the root of trust for your ClaX setup. This certificate will be used to sign both the server and client certificates. Run the following command: openssl req -out clax_ca.pem -new -x509 -subj '/CN=ClaxCertificateAuthority' This command generates a CA certificate in the file clax_ca.pem with the common name “ClaxCertificateAuthority.” Create a Serial File: To manage signed certificates, create a serial file. Use the following command: echo -n '00' > clax_file.srl Generate a Server Certificate: Now, generate a server certificate for ClaX, which will be used for server-side SSL authentication. Use these commands: openssl genrsa -out clax_server.key 2048 openssl req -key clax_server.key -new -out clax_server.req openssl x509 -req -in clax_server.req -CA clax_ca.pem -CAkey privkey.pem -CAserial clax_file.srl -out clax_server.pem -subj '/CN=clax-server' These commands create a server key (clax_server.key), a certificate signing request (clax_server.req), and the server certificate (clax_server.pem). Customize the common name (‘/CN’) to match your ClaX server’s name. Generate a Client Certificate (Optional): If you require SSL client verification, generate client certificates for ClaX. These certificates allow clients to authenticate themselves to the ClaX server. Use commands similar to those in step 3 to create client keys, certificate signing requests, and client certificates. Convert Client Certificate to PKCS12 (Optional): If you’ve generated client certificates and want to use them in applications supporting PKCS12 format, convert them with the following command: openssl pkcs12 -export -in clax_client.pem -inkey clax_client.key -out clax_client.p12 This command creates a client PKCS12 certificate in the file clax_client.p12. Configure clax.ini File: Open your clax.ini configuration file, and add or update the SSL section with the paths to the CA certificate, server certificate, and server key. The clax.ini file should have a section like this: [ssl] enabled = yes verify = yes cert_file = clax_server.pem key_file = clax_server.key ca_file = clax_ca.pem Adjust the paths in the cert_file, key_file, and ca_file parameters to match the location of your SSL certificates. These settings ensure that ClaX uses SSL authentication with the generated certificates. By following these steps, you’ll have created SSL certificates for ClaX and configured them in the clax.ini file, enabling secure SSL communication for your ClaX server. Clients connecting to ClaX will use these certificates for authentication and encryption, enhancing the security of your remote deployments and file exchanges. We hope you find ClaX useful! Let us know if you have any feedback by opening an issue on the GitHub repo. The project is open source and we welcome contributions from the community.
Bantotal DevOps is here

One of the great initiatives this year for Clarive is the new Bantotal-Clarive integration, packaged into a ready-to-use solution and distributed directly through the new and exciting BDevelopers marketplace. We've worked closely with DLYA, our partner and the vendor behind Bantotal, to create a comprehensive offering that lets Bantotal clients and prospects set up a delivery toolchain on top of their Bantotal implementations.

Clarive can be the perfect solution for you if:

- You and your organization would like to create a continuous delivery process around and for your Bantotal customizations and vendor packages and patches (called "zero deliveries").
- You need to coordinate other DevOps pipelines already in place for non-Bantotal systems that have to be orchestrated with the rest of your banking core.

DevOps is key to making financial system changes flow faster without sacrificing quality. The Bantotal platform can greatly benefit from launching DevOps initiatives:

- managing and deploying to QA and preproduction environments
- deploying to and rolling back out of production environments
- orchestrating the deployment of dependent systems
- making banking core and mission-critical changes predictable and repeatable
- promoting a culture of safer changes and feedback loops within the teams that work around Bantotal

Don't hesitate to get in touch with us or with the Bantotal team. For more information, please read our Bantotal solution brief.
Quick check your continuous delivery maturity

Have you jumped on the DevOps wagon already? You probably have. But perhaps you are still not sure whether your toolbox is missing a certain tool. Or maybe your organization or team is starting to plan a full DevOps adoption and is researching exactly what needs to be installed to build the perfect toolchain. Perhaps you have a gap in some process that you are not even aware of. Establishing a good, solid DevOps toolchain will go a long way toward determining how successful your DevOps practices become.

In this blog post we lay out maturity level checklists for different DevOps areas so you have an idea of where you are in terms of continuous delivery. We will review the maturity levels for the following DevOps aspects:

- Source code management
- Build automation
- Testing
- Managing database changes
- Release management
- Orchestration
- Deployment and provisioning
- Governance, with insights

Source code management tool

Commonly known as the repository, a source code management tool provides version control and keeps track of changes in any set of files. As a distributed revision control system it is aimed at speed, data integrity, and support for distributed, non-linear workflows. This is the maturity level checklist; it goes from no or low maturity to a high maturity state:

- No version control
- Basic version control
- Source/library dependency management
- Topic branches flow
- Sprint/project to branch traceability

Build automation tool

Continuous Integration (CI) is a software development practice that aims for frequent integration of individual pieces of work. Commonly each person integrates at least once per day, giving place to several integrations during the day. Each integration should be verified by an automated Build Verification Test (BVT). These automated tests can detect errors just in time, so they can be fixed before they create bigger problems down the road. This reduces a lot of integration issues, since the practice lets teams develop faster and more efficiently. This is the automation maturity checklist to see how you are doing with your CI:

- No build automation: built by hand, binaries checked in
- Build automated by a central system
- Reusable builds across apps/projects
- Continuous/nightly builds
- Feedback loop for builds

Testing framework

Test automation can cover code, systems, services, etc. It allows every modification to be tested in order to guarantee good QA. Even a daily or weekly release of code can produce a report that is sent early every morning. To accomplish this you can install the Selenium app in Clarive. This checklist will help you determine your testing practice level:

- No tests
- Manual tests
- Automated unit/integration tests
- Automated interface tests
- Automated and/or coordinated acceptance tests
- Test metrics, measurements, and insights
- Continuous feedback loop and low test failure rate

Database Change Management

It's important to make sure database changes are taken into consideration when releasing to production. Otherwise, your release team will be working late at night trying to finish up a release with manual steps that are error-prone and nearly impossible to roll back.
Check your team's current database change management state:

- Manual data/schema migrations
- Automated, un-versioned data/schema migrations
- Versioned data/schema migrations
- Rollback-enabled data/schema migrations

Since database schema changes are sometimes delicate, make sure to include your DBA team in the peer review process, so that changes are treated as code: they can be merged and patched, and they can be code reviewed.

Release Management and Orchestration

You can fully orchestrate the tools involved in the process and manage your release milestones and stakeholders with Clarive. Imagine that a developer makes a change in the code; after that happens you need to promote the code to the integration environments, send notifications to your team members and run the testing plan. Are you fully orchestrating your tools? Find out with this checklist:

- Infrequent releases; releases need manual review and coordination
- Releases are partially automated but require manual intervention
- Frequent releases, with defined manual and automated orchestration and calendaring
- Just-in-time or on-demand releases; every change is deployed to production

Deployment tool

Deploying is the core of how you release your application changes. How is your team deploying?

- Manual deployment
- Deployment with scripts
- Automated deployment server or tool
- Automated deployment and rollback
- Continuous deployment with canary, blue-green and feature-enabling technology

Provisioning

As part of deployment, you should also review your provisioning tasks and requirements. Remember that it's important to provision the application infrastructure for all required environments, keep environment configuration in check and dispose of any intermediate environments in the process. Yes, provisioning also has several maturity levels:

- You provision environments by hand
- Environment configuration with scripts as part of deployment
- Provisioning of disposable environments with every deployment
- Full provisioning, disposal and infrastructure configuration as part of deployment
- Full tracking of environment-application dependencies and cost management

We have come a long way doing this with IaC (Infrastructure as Code). Nowadays a lot can be accomplished with less pain using technologies such as containers and serverless, but you still need to coordinate all cloud (private and public) and related dependencies, such as container orchestrators.

On your path to provisioning automation and hands-free infrastructure, make sure you have a clear (and traceable) path to the Ops part of your DevOps team or organization, so you avoid bottlenecks whenever infrastructure just needs a magic touch of the hand. One way of accomplishing that is to have a separate stream or category of issues assigned to the DevOps teams in charge of infrastructure provisioning. We'll cover that in a later blog post. With the right reports, you'd be amazed by how many times releases get stuck in infrastructure provisioning hell…

Governance

Clarive also provides productivity and management tools such as Kanban swimlanes, planning, reports and dashboards, giving managers a way to identify problems and teams a way to quickly check the overall performance of the full end-to-end process.
Here are the key points to make sure you evolve the overall governance of your DevOps process:

- There is no end-to-end relationship between request (why) and release (when, how, what)
- Basic Dev-to-Ops traceability, with velocity and release feedback
- Full traceability from request to deployment
- Immediate feedback and triggers

There you go, let's do DevOps like the grown-ups do

In this post we have covered the main continuous delivery aspects that every DevOps team should be looking to improve, along with their respective readiness levels. So get together with your team and start drawing up a good DevOps adoption plan 😉

Schedule a demo with one of our specialists and start improving your DevOps practices.
Deliver mobile applications in 5 minutes

In this post we explain how to deploy applications with Clarive EE to both the Google Play Store and the iOS App Store, thanks to the Clarive Plugins. The process requires no additional programming knowledge: the interface offered by the rule designer is all you need to configure the deployments. For more information about Clarive and the elements used here, check our Docs.

Deploying to the Google Play Store

Overview

For Android applications, the Gradle plugin is used to build the application and the Fastlane plugin is used to send it to the Play Store automatically.

Configuration

A first version of the application must already have been uploaded so that subsequent deployments can happen automatically.

Create a Generic Server from the Resources->Server panel. This server hosts the application and has Gradle and Fastlane installed.

Once the server is configured, go to the rule designer and create a new rule of type Pipeline in the Admin panel.

Use the PRE phase to build the application by dragging in the Gradle compile operation. In the operation's configuration, select the server configured earlier and fill in the fields needed to run the build.

Next, drag the Fastlane task operation into the RUN phase to configure sending the application to the Play Store, and complete the operation form with all the data needed for the upload.

With this, the Play Store deployment is ready. Next, we add the operations to deploy to the App Store within the same Pipeline.

Deploying to the Apple App Store

Overview

In this case, the Fastlane plugin is used to build the application and send it to the App Store automatically. Xcode must be installed along with Fastlane.

Configuration

Create a Generic Server, just as in the previous case. This server hosts the application and has Fastlane and Xcode installed. You must also configure the access credentials for your App Store account in the iOSCredentials Resource, from the Resources->iOS panel.

As with Android, build the application in the PRE phase with the Fastlane task operation. In the operation's configuration, select the server and the credentials, and fill in the remaining required fields.

Then configure the upload of the application to the App Store using the same Fastlane task operation in the RUN phase: select the 'Upload App' option and complete the fields.

With this Pipeline, automatic publication to both the Play Store and the App Store is configured (a rough command-line sketch of the equivalent Gradle and Fastlane calls appears at the end of this post). If you have any questions, you can get in touch with us at Clarive Community.
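For readers who like to see the moving parts, this is roughly what the pipeline operations above drive behind the scenes. It is only a sketch under assumptions: the paths, scheme and options are placeholders, flags vary between Fastlane versions, and the Clarive operations handle credentials and parameters for you through their forms:

# Android: build the release artifact with Gradle, then upload it to the Play Store with Fastlane
./gradlew assembleRelease
fastlane supply --apk app/build/outputs/apk/release/app-release.apk --track production

# iOS: build the app with Fastlane gym, then upload the .ipa to the App Store with deliver
fastlane gym --scheme MyApp
fastlane deliver --ipa MyApp.ipa --skip_screenshots --skip_metadata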
Why continuous delivery should run in containers

The problem at hand

The situation with the DevOps toolchain is that it just has too many moving parts. And these moving parts have become a cumbersome part of delivering applications. Have you stopped to think how many different tools are part of your pipeline, and how they are slowing your delivery down?

These might be some of the problems you are facing when setting up your continuous delivery pipeline:

- Changes in the application require changes in the way it's built/deployed
- New components require new tools
- Many build, test, and deploy tools have plenty of dependencies

The container bliss

Containers are basically lightweight kits that include pieces of software ready to run the tasks in your pipeline. When containers are used as part of the pipeline, they can include all the dependencies: code, runtime, system tools, system libraries, settings. With containers, your software will run the same pipeline no matter what your environment is. You can run the same container in development and staging environments without opening Pandora's box. Containers are the way to stay consistent in your CI/CD and releasing/provisioning practices.

Other advantages of containers are:

- Containers can be versioned
- Containers can hold the most relevant DevOps infrastructure
- Containers are cheap and fast to run
- Ops can let Dev drive CI/CD safely (by giving Devs templatized containers)

Clarive and Docker: what a combo!

Docker is therefore a great companion to your DevOps stack. Docker containers allow your project and repository rulebooks to run pipelines alongside any necessary infrastructure without requiring additional software packages to be installed on the Clarive server. Clarive runs your DevOps pipelines within managed containers.

By using containers in Clarive you can:

- Isolate your users from the server environment so that they cannot break anything.
- Version your infrastructure packages, so that different versions of an app can run different versions of an image.
- Simplify your DevOps stack by having most of your build-test-deploy continuous delivery workflows run on one server (or more, if you have a cluster of Clarive servers), instead of having to install runners for every project everywhere.

Clarive and Docker flowchart

Curating a library of DevOps containers

Using a registry is a good way of keeping a library of containers that target your continuous delivery automation. With Clarive you can maintain a copy of a local registry that is used exclusively for your DevOps automation.

Defining "natures"

Each repository in your project belongs to, or implements, one or more natures. The nature of your code or artifacts defines how they are going to be implemented. A nature is a set of automation and templates, and these templates can use different Docker containers to run. For example, your application may require Node + Python, so two natures. If you keep these natures in templates they will be consistent and will help developers comply with a set of best practices on how to build, validate, lint, test and package new applications as they move to QA and live environments.

Running commands on other servers

Clarive uses Docker for running shell commands locally. That guarantees that rulebooks (in the project's .clarive.yml file) will not have access to the server(s) running your pipelines. But you can still run shell commands on other servers and systems, such as Linux, Windows, various Unix flavors and other legacy systems (including the mainframe!) using the host: option in the shell: command.
How do I use my own containers?

If a container is not available on the Clarive server, the Clarive rulebook downloads it from Docker Hub. So, to use your own containers, you have two options:

- Upload them to Docker Hub, then use them from your rulebook. Clarive will download them on the first run (see the short sketch at the end of this post).
- Install them on your Clarive server. On the first run Clarive will build another version of your container based on Clarive's default Dockerfile, named clarive/<your container>. You don't need to add the clarive/ prefix to the name yourself; that's done for you automatically.

Manage all active Docker containers in your pipeline from within Clarive

Getting started today

Using containers is an important step in implementing a continuous delivery and continuous deployment process that is streamlined and avoids environment clutter. Head over to our 30-day trial and let Clarive run your DevOps automation in Docker containers for better consistency and easy setup of your temporary environments.

Learn more about the Clarive Docker admin interface in this blog post and learn how to manage containers and Docker images.
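As referenced in "How do I use my own containers?" above, here is a minimal sketch of the Docker Hub route. The image name and tag are placeholders and the Dockerfile is whatever fits your project's nature; once the image is pushed, your rulebook can refer to it by that name and Clarive will pull it on the first run:

# Build an image for your project's nature (for example Node + Python) and publish it to Docker Hub
docker build -t myorg/node-python:1.0 .
docker push myorg/node-python:1.0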
Scheduling your pipeline stages

Running pipeline jobs against critical environments often requires a scheduled execution to take place. Clarive scheduled jobs always run in three main steps, called "Pre", "Run" and "Post".

Why run a pipeline in phases?

Most of the time the deployment job should not wait to run all of its tasks at the scheduled time; many of them can run as soon as the job is scheduled. There are several phases or stages to every scheduled deployment, most of which can run as early as the job is scheduled. Tasks such as building, packaging, testing and even provisioning infrastructure can take place earlier, as long as they do not impact the productive environments. When defining a pipeline, always think of what can be detected in earlier phases, so that the evening deployment run does not fail on something that could have been checked previously.

Separating your pipeline into different stages

Pipeline phases

Here are the different pipeline deployment phases you can run in Clarive.

Deployment preparation (PRE)

The deployment pipeline will take care of:

- Creating the temporary deployment directory
- Identifying the files changed to be deployed (BOM)
- Checking out the files related to the changesets to be deployed, either from code repositories (code) or directly attached to the changesets (SQL)
- Renaming environment-specific files (e.g. web{PROD}.properties will be used only for deployments to the PROD environment)
- Replacing variables inside the files (e.g. replacing ${variable} with the real value configured for the affected project in the target environment; a rough illustration of this idea appears at the end of this post)
- Nature detection: Clarive will identify the technologies to deploy by analyzing the BOM
- Common deployment operations

Deployment operations (RUN)

The actual deployment operations are executed in this phase of the Clarive job (moving binaries, restarting servers, checking the installation, etc.)

Deployment operations

Post-deployment operations (POST)

After the deployment is finished, Clarive will take care of any task needed depending on the final status. Some of the typical operations performed at this point are:

- Notify users
- Transition changeset states
- Update repositories (e.g. tag Git repositories with an environment tag)
- Synchronize external systems
- Clean up temporary directories locally and remotely
- Synchronize related topic statuses (defects, releases, etc.)

Post-deployment operations

Recap

Whether or not you use Clarive, when defining your deployment pipelines always think in terms of these three stages or phases:

- PRE – everything that can run as soon as the job has been scheduled
- RUN – crucial changes that run at the scheduled time
- POST – post activities or error recovery procedures

Happy DevOps! Try Clarive now and start your DevOps journey to continuous delivery.
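As promised in the PRE section above, here is a rough illustration of what environment-specific file selection and variable replacement amount to. This is a generic shell sketch with placeholder file names and values, not Clarive's implementation; the pipeline performs these steps for you using the project variables configured for the target environment:

# Pick the PROD-specific properties file and substitute a project variable into it (GNU sed assumed)
cp 'web{PROD}.properties' web.properties
sed -i 's|\${db_host}|proddb01.example.com|g' web.properties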