# Clarive 7 went through a complete UI/UX revamp

Since last summer we've embarked on a complete revamp of our user interface and experience. The main driver was to make Clarive easier for new users. After some careful user research, we realized that only advanced users were navigating the interface fluidly: they would move through its internal tabs and easily find the information needed to build releases and deploy changes. Most users, however, were slower and took a long time to find information in our application menus, especially when working outside the main flow they were used to. It was time for a revamp.

## User research

As our team went through a Google Residency, we upped our game when it comes to interface research. Applying the Design Sprint philosophy helped us improve our UI design process.

*A few good design sprints = great UI/UX*

## Project-based interface

One of the most notable features of Clarive is its multi-application, multi-project navigation. Most Dev and Ops tools have a project/application/repository selector as a starting point. We were always concerned with giving our release managers a way to drive their releases across many applications. But most day-to-day users, mostly developers, work on one project or application at a time. We were forcing them to wade through a lot of information that was not in the right place to start with.

*The Clarive UI now always has a project in context*

Clarive is now both single-project (see the project selector) and multi-project thanks to the Explore selector. We are working on two more views: first the user-mode views, where users will see all the project data that pertains to them, and later the project group view, which will be a single-mode project scope.

## Oh, tabs!

One of the most noticeable things we got rid of is the built-in tabbed interface.
This feature has been part of Clarive since its inception and early versions that ran on IE6, so it was a tough call to make.

*The old tabbed interface in Clarive has been gone for a while*

The reason we had a tabbed interface was that our users needed to deal with a lot of information and, if you recall from a long time ago, (Microsoft) browsers did not have tabs in them. We dropped IE6 support a long time ago, but the tabbed interface persisted... until recently.

*Clarive is now browser-tab optimized, as we dropped our old single-app tabbed interface*

Some of our power users will miss having tabs but, to be honest, we ourselves were annoyed by how the interface would become very busy very quickly, difficult to find your way around, and non-standard, as URLs did not correspond to the UI state at a given point in time.

## React to this

The new interface is slick! We've migrated the layout and most of our UI components to React. It's been a fun and rewarding process. We tried a Vue.js component interface first, which was good, but we found React's components and JS-focused architecture and style more familiar to the team.

*The statuses admin UI has the new, reactified look*

We are used to building visual components straight in JavaScript, and JSX seemed like a more natural and intuitive fit. We are also in love with Ant Design's library of components, which were ready to use and easy to style.

## Pubsub and logging streams

With our revamped pubsub server, UI push updates happen on the fly and make the interface even more reactive. No need for auto-refresh anymore. We also took the opportunity to reformat and restyle the log output stream, with terminal ANSI colors and push updates.

*Job streaming with push updates and ANSI terminal colors*

## The topic grid

The topic grid, also known as the topic list or table, has been completely redone in the just-released version 7.4.0. We cover that in another blog post, but it too has been an exercise in interface design.
*Topics can now be filtered and queried in new ways*

We've also improved topic querying and filtering, with revamped filters that are reflected in the URL as query parameters. That way you can use your browser bookmarks as a replacement for the deprecated favorites.

## Admin users and roles

We're still in the process of updating some of our admin interfaces, but for now the most significant changes come in user administration and role/action configuration.

*User administration also went through a good usability review*

## Shields

We also wanted to give our users more standard visual cues for the status of a release or changeset build, testing and deployment. For that we picked https://shields.io/ as the reference implementation, and now it looks like this:

*Shields now give users indications on the status of build, test and deploy*

It should make it much easier for our users to get a feel for how their changeset is progressing through the delivery lifecycle.

## Deprecation: favorites

Besides dropping our tabbed interface in favor of a single-page design, we've removed the Favorites section, since you can now use your browser favorites instead. Favorites were a slapped-on fix for the old tabbed system, since tabs did not have canonical URLs most of the time.

## More to come

Watch for more UI revamps, as we're now looking into these interfaces for the upcoming 7.6 series:

- Job monitoring and job dashboard
- Release planning redo
- Inline topic editing (without a separate window)
- Report designer

The above should be available later this year. On the roadmap for next year, most likely the 7.8 series, there will be a rule designer revamp, as we're implementing project-based rules inspired by our YAML rulebooks (which are already project-based).

That's all for now. I hope you enjoy working with our new UI as much as we do.
# Bringing DevOps to the Mainframe (4/4)

In this last installment of the series, we will review how Clarive can replace z/OS SCM tools such as CA Endevor or Serena ChangeMan with a global DevOps pipeline that can drive unified deployments across all platforms.

## Source code versioned and deployed by Clarive

Clarive can deploy source code managed outside the mainframe.

### Selecting elements to deploy

In this scenario, z/OS artifacts (programs, copybooks, JCLs, SQLs, etc.) are versioned in Clarive's Git, but it could be done with any other VCS for that matter. The developer selects the versions of elements to deploy from the repository view, attaching them to the Clarive changeset.

*Versions associated to changesets*

### Preparing mainframe elements

Clarive will check out the selected version of the source code to deploy in the PRE step of the deployment job and will perform the activities needed to check code quality (i.e. execute static code analysis, check vulnerabilities, etc.) and to identify the type of compilation to be executed (i.e. decide the type of item depending on the naming convention, parse the source code to decide whether DB2 precompilation is needed, etc.).

Depending on the elements to deploy, different actions will be executed:

- Copybooks, JCLs and all other elements that don't need compilation will be shipped to the destination PDSs
- Programs will be precompiled and compiled as needed, and the binaries will be kept in temporary load PDSs

A Clarive rule decides which JCL template will be used to prepare each type of element and submits the JCL after replacing the variables with their actual values, depending on the deployment project and environment.

*Different z/OS element natures*

### Deploying elements

Depending on the elements to deploy, different actions will be executed:

- Programs will be shipped to the destination PDSs and bound as needed.
A Clarive rule will decide which JCL template will be used to deploy each type of element and will submit the JCL after replacing the variables with their actual values, depending on the deployment project and environment.

*Deploy and bind examples*

As usual, Clarive will keep track of any nested JCL jobs that may run associated with the parent JCL.

### Rollback

Clarive will start a rollback job whenever an error condition occurs during rule execution. It will automatically check out and deploy the previous version of the elements available in the source repository.

## Conclusion and next steps

In this DevOps for the Mainframe series, we have laid out the key features of Clarive for bringing mainframe technologies into the full, enterprise-wide continuous delivery DevOps pipeline. Once an organization has decided to modernize mainframe application delivery, there is a set of recommended steps:

### Establish prerequisites

The first step IT leaders need to take before modernizing mainframe application delivery is to evaluate whether the correct prerequisites are in place or in progress. Successfully implementing a mainframe application delivery tool like Clarive requires either an existing process or the will to implement one.

### Assess operational readiness

Many organizations discover too late that they have underestimated, sometimes dramatically, the investment needed in people, processes, and technology to move away from their current environment when modernizing mainframe application delivery. An early readiness assessment is essential to crafting a transition plan that minimizes risk and provides cross-organizational visibility and coordination for the organization's modernization initiatives. Many organizations already have some sort of mainframe delivery tooling in place.
When key processes have been defined within such a framework, optimizing and transforming them into an enterprise-wide delivery is significantly easier, but they still need to be integrated into a single Dev to Ops pipeline, as mainframe delivery requests typically tend to run outside the reach of release composition and execution.

### Prepare the IT organization for change

IT leaders should test the waters to see how ready their own organization is for changing the way mainframe application delivery processes fit into the picture. IT managers must communicate clearly to staff the rationale for the change and provide visibility into the impact on individual job responsibilities. It is particularly important that managers discuss any planned reallocation of staff based on reductions in troubleshooting time, to alleviate fears of staff reductions.

This concludes our blog series on deploying to the mainframe. In this series we reviewed many different aspects of fully bringing your mainframe system up to speed with your enterprise DevOps strategy:

- Define the critical capabilities and tooling requirements to automate your mainframe delivery pipeline.
- Decide where your code will reside and who (Clarive or a mainframe tool) will drive the pipeline build and deploy steps.
- Integrate the pipeline with other functional areas, including related services, components and applications, so that releases become a fully transactional change operation across many systems and platforms.

We hope you enjoyed it. Let us know if you'd like to schedule a demo or talk to one of our engineers to learn more about how other organizations have integrated the mainframe into the overall delivery pipeline.

Other posts in this series:

- Bringing DevOps to the Mainframe pt 1
- Bringing DevOps to the Mainframe pt 2: Tooling
- Bringing DevOps to the Mainframe pt 3: Source code versioned in z/OS
# Bringing DevOps to the Mainframe (3/4): Source code versioned in z/OS

In this installment of the series we will review how Clarive drives popular z/OS SCM tools such as CA Endevor or Serena ChangeMan as part of your global DevOps pipeline.

## Source code versioned in z/OS (Endevor or ChangeMan)

In this scenario, the source code to be deployed resides on the mainframe.

### Selection of objects to deploy

Clarive allows the development user to attach z/OS packages (Endevor or ChangeMan) to changesets for further processing.

*z/OS packages*

*z/OS repository (ChangeMan ZMF)*

### Preparing payload elements

Clarive rules define the logic to prepare the z/OS packages by executing the online or batch operations needed to prepare them (freeze, inspect, generate, execute, etc.).

*Inspect package operation (Endevor)*

### Deployment of the elements

The deployment of the packages is executed by Clarive by submitting the specific JCL to ship/promote/approve the packages included in the job changesets.

*Ship package (Endevor)*

*Endevor ship package sample output*

### Rollback

A Clarive rule allows the administrator to define how to roll back changes. In both Endevor and ChangeMan ZMF it executes a Backout operation on each package included in the job.

*Backout package (Endevor)*

## Conclusion

Endevor and ChangeMan are powerful version control tools used in mainframe environments. Fundamentally, they have similar or equivalent workflows for the software configuration lifecycle, but these tools are often used independently of the overall enterprise DevOps pipeline. DevOps implementations often leave them out; however, they remain critical to delivering and running the critical areas of the business alongside much newer technologies. With its mainframe orchestration capabilities, Clarive enables organizations with either tool to build better integrated pipelines that bring mainframe changes into the main Dev to Ops stream.

Stay tuned for the fourth and final installment of this series!
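As an aside, the JCL variable-replacement step mentioned throughout this series can be sketched in a few lines. The template format, `${NAME}` placeholder syntax, dataset names and job names below are illustrative assumptions for the sketch, not Clarive's actual template format:

```python
import re

# Hypothetical ship JCL template -- placeholder syntax and names are
# illustrative, not Clarive's actual template format.
TEMPLATE = """\
//${JOBNAME} JOB (ACCT),'SHIP',CLASS=A,MSGCLASS=X
//SHIP     EXEC PGM=IEBCOPY
//SYSUT1   DD DSN=${SRCPDS}(${MEMBER}),DISP=SHR
//SYSUT2   DD DSN=${DSTPDS}(${MEMBER}),DISP=SHR
"""

def render_jcl(template: str, variables: dict) -> str:
    """Replace ${NAME} placeholders with per-project/environment values."""
    return re.sub(r"\$\{(\w+)\}", lambda m: variables[m.group(1)], template)

# Values would come from the deployment project and environment.
jcl = render_jcl(TEMPLATE, {
    "JOBNAME": "SHIP001",
    "SRCPDS": "DEV.COBOL.LOAD",
    "DSTPDS": "PROD.COBOL.LOAD",
    "MEMBER": "PGM001",
})
print(jcl)
```

The rendered JCL would then be submitted to the z/OS job queue and its spool output parsed for return codes.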
Read the previous post of this series and learn about the mainframe features you can find in Clarive and how they are integrated into the system.
# Bringing DevOps to the Mainframe (2/4): Tooling

In this second part of the blog series we will detail the mainframe features you can find in Clarive and how they are integrated into the system.

## Clarive mainframe features

Clarive manages all aspects of the z/OS code lifecycle:

- Sending files to z/OS partitions
- Character translation maps and codepages
- Identifying relationships for impact analysis
- JCL template management
- JCL submission
- Nested JCL management and synchronous/asynchronous queue control
- Retrieving job spool output and parsing results
- Integration rules

Clarive features 3 entirely different integration points with the mainframe, each serving a specific purpose:

- Job queue access – to ship files and submit jobs into the z/OS job queue in batch mode. Clarive will track all nested jobs and parse results into the job tree.
- ClaX agent – for delivering files into datasets and/or OMVS partitions and executing z/OS processes online. This is the preferred way of running REXX scripts sent from Clarive to the mainframe and accessing z/OS facilities such as SDSF®, ISPF®, VSAM® data records or RACF®.
- Web services library – for writing code that initiates calls from the mainframe directly into Clarive, using TCP/IP sockets and the RESTful web service features of Clarive rules.

*Clarive to mainframe integration points*

## Tool considerations

Clarive is a tool that allows enterprise companies to implement an end-to-end solution to control the software lifecycle, providing countless out-of-the-box functionalities that help solve complex situations (automation, integration with external tools, critical regions, manual steps in the process, collaboration, etc.).

### CCMDB – configuration items

In Clarive, any entity that is part of the physical infrastructure or the logical lifecycle is represented as a configuration item (CI). Servers, projects/applications, source repositories, databases, users, lifecycle states, etc.
are represented as CIs in Clarive under the name "Resource". Any resource can have multiple relationships with other resources (i.e. an application is installed on a server in production, a user is a developer of an application, the Endevor "system x/subsystem y" combination is the source code repository related to an application, etc.). The graph database made of these entities and relationships is Clarive's Change-oriented Configuration Management Database (CCMDB). The CCMDB is used to keep the whole system configuration, as well as to perform impact analysis, infrastructure request management, etc.

*CCMDB navigation*

### Natures / technologies

Clarive natures are special CIs that automate the identification of the technologies to be deployed by a deployment job. A nature can be detected by file path/name (i.e. nature SQL: `*.sql`), by project variable values (i.e. `${weblogic}: Y`) or by parsing the changed files' code (i.e. COBOL/DB2: `qr/EXEC SQL/`).

*Natures list*

### JES spool monitoring

Clarive takes care of downloading and parsing the spool output when submitting a JCL in z/OS, splitting the DDs available in the output and identifying and using the return codes of all steps executed in the JCL.

*JES output viewer*

### Calendaring / deployment scheduling – calendar slots

Any deployment job in Clarive will be scheduled, and Clarive will offer the available slots depending on the infrastructure affected by the deployment. Infrastructure administrators can define these slots at any CCMDB level (environment, project, project group, server, etc.).

*Calendar slots definition*

### Rollback

A Clarive rule allows the administrator to define how to roll back changes. In both Endevor and ChangeMan ZMF it executes a Backout operation on each package included in the job.

*Rollback control*

## Next steps

Features are an important step when picking the right tool to bring DevOps to the mainframe.
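As an illustration of the nature-detection rules described above (file patterns, project variable values, source-code regexes), here is a minimal sketch; the rule set, data layout and matching logic are illustrative assumptions, not Clarive's actual implementation:

```python
import re
from fnmatch import fnmatch

# Illustrative nature rules in the spirit of the examples above: a nature
# matches by file glob, by a project variable's value, or by a regex over
# the changed file's contents.
NATURES = [
    {"name": "SQL",       "glob": "*.sql"},
    {"name": "WebLogic",  "variable": ("weblogic", "Y")},
    {"name": "COBOL/DB2", "content": re.compile(r"EXEC\s+SQL")},
]

def detect_natures(path, source, project_vars):
    """Return the natures that apply to one changed file."""
    found = []
    for nature in NATURES:
        if "glob" in nature and fnmatch(path, nature["glob"]):
            found.append(nature["name"])
        elif "variable" in nature:
            var, value = nature["variable"]
            if project_vars.get(var) == value:
                found.append(nature["name"])
        elif "content" in nature and nature["content"].search(source):
            found.append(nature["name"])
    return found

print(detect_natures("src/PGM001.cbl", "EXEC SQL SELECT ...", {"weblogic": "N"}))
# → ['COBOL/DB2']
```

Once the natures of a changeset are known, the deployment job can pick the matching preparation and deployment steps for each technology.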
In the next two installments of the series we will review how Clarive can deploy mainframe artifacts (or elements), either by driving popular z/OS SCM tools such as CA Endevor or Serena ChangeMan, or by replacing them with Clarive's own z/OS deployment agents.

Read the first post of this series and learn more on how to bring DevOps to the mainframe.
# Bringing DevOps to the Mainframe (1/4)

The DevOps movement, in general, tends to exclude technologies that are outliers to the do-it-yourself spirit of DevOps. This happens when certain technologies are closed to developer-driven improvements, or when roles are irreversibly inaccessible to outsiders. That's not the case with the mainframe. The mainframe is armed with countless development tools and programmable resources that rarely fail to enable Dev to Ops processes.

Then why have DevOps practices not prospered on the mainframe?

- Ops are already masters of the productive and pre-productive environments – so changing the way developer teams interact with those environments requires more politics than technology and is vetted by security practices already in place.
- New tools don't target the mainframe – the market and open source communities have focused first on servicing Linux, Windows, mobile and cloud environments.
- Resistance to change – even if there were new tools and devs could improve processes themselves, management feels that trying out new approaches, especially those that go "outside the box", could end up putting these environments, and mission-critical releases, at risk.

Organizations want to profit from DevOps initiatives that are improving the speed and quality of application delivery in the enterprise at a vertiginous pace. But how can they leverage processes that are already in place alongside the faster, combined pipelines set up on the open side of the house?

## Enter Clarive for z/OS

Our clients have been introducing DevOps practices to the mainframe for many years now. This has been made possible thanks to the well-known benefits of accepting and promoting the bimodal enterprise.
There are two approaches that can be used simultaneously to accomplish this:

- Orchestrate mainframe tools and processes already in place – driving and being driven by the organization's delivery pipeline
- Launch modernization initiatives that change the way Dev and Ops deliver changes on the mainframe

## Business benefits of bringing DevOps to the mainframe

The benefit is simple. Code that runs on the mainframe is expensive and obscure. By unearthing practices and activities, organizations gain valuable insight that can help transform the z/OS-dependent footprint into a more contained and flexible part of the pipeline, with these key benefits:

### Coordinate and speed up application delivery

Mainframe systems don't run in isolation. The data they manage and the logic they implement are shared as a single entity throughout the enterprise by applications in the open, cloud and even mobile parts of the organization. Making changes that disrupt different parts of this delicate but business-critical organism needs to be coordinated at many phases, from testing to UATs to production delivery. Delivering change as a single transactional pipeline has to be a coordinated effort, both forward and backward.

### End-to-end visibility

DevOps practices perceive the mainframe as a closed box that does not play well with activities that target better visibility and end-to-end transparency. Having dashboards and reports that can work as input and output between mainframe release processes and other pipelines will help deliver change.

### Run a leaner operation and avoid waste

Making mainframe processes part of the bigger picture helps determine where constraints may lie and which parts of the pipeline may be deemed obsolete or become bottlenecks.

### Lower release costs

Mainframe tools are expensive and difficult to manage. MIPS and processing on the mainframe may be capped, and new processes could create unwanted expenses.
Relying more on tools that drive the mainframe from Linux may in return translate into significant per-release cost savings, encouraging a more continuous release process.

## Use cases

The following are some of the most popular use cases that our clients have implemented using the Clarive z/OS platform and tools:

- Compile and link programs using JCL preprocessed templates
- Deploy DB2 items directly to the database
- Compile related COBOL programs when copybooks change
- Fully control what is deployed to each environment at a given time
- Schedule jobs according to individualized release and availability calendars and windows
- Request approval for critical deployment windows or sensitive applications or items
- Keep the lifecycle in sync with external project and issue management applications
- Run SQA on the changes promoted, blocking deployment if a minimum score has not been reached
- Reliably roll back changes in production, replacing previous PDS libraries with the correct ones
- Provision CICS resources on request by users

Stay tuned for more of this DevOps for the mainframe blog series!

Try Clarive now and start bringing DevOps to the mainframe.
# Clarive 7.0.13 released with custom kanban card layouts

This release contains a lot of minor fixes and improvements over 7.0.12. It also focuses on refactoring the interface and improving the kanban boards.

## Git repository navigation on a tab

In Clarive 7.0.13 you will find a completely refactored Git repository navigation panel. You can view sources, navigate branches and tags, compare references and much more. To access the new interface, just navigate to the project in the left panel, expand it and click on the repository node.

## Load default data by profile

Now any Clarive profile (a profile is a predefined set of topic categories, rules and roles that can be loaded into Clarive) can include default data as part of it. The ClariveSE profile now includes a sample-html project and two releases with several changes in them. It also automates the launch of 3 deployment jobs to INTE, TEST and PROD.

To get the profile and the default sample data installed, execute `cla setup <profile>` and answer yes to the question "Load default data?". Once you start the Clarive server, it will automatically load the profile and the default data.

## Kanban board improvements

### Custom card layout

You can now configure the layout of the cards on your kanban boards to show the information you really want to focus on. To configure the layout, go to the board Configuration and select Cards Layout.

### Auto refresh

In the Quick View options panel (click on the View button) you'll now find a switch to toggle Auto Refresh for the board. It will be updated with changes to the topics shown whenever the board tab is activated.

### Save quick view by user

In Clarive 7.0.13 the options selected in the quick view menu are saved locally in your browser storage, so every time you open the board it will use the last swimlanes, auto refresh, cards-per-list, etc. configuration you used.
### Predefined statuses by list

Whenever you create a new board, it will be created with three default lists, and default statuses will now be assigned to these lists with the following rules:

- New: initial statuses
- In Progress: normal statuses
- Done: final and cancelled statuses

## Kill tree when a job is cancelled

One of the most important improvements in Clarive 7.0.13 is the ability to kill/cancel the remote processes being executed by a job when it is cancelled from the interface. You can read about this new feature in this blog post.

## Improvements and issues resolved

- [ENH] Git repositories navigation on a tab
- [ENH] ClaX libuv adaptation
- [ENH] NPM registry directory new structure
- [ENH] Add rulebook documentation to service.artifacts.publish
- [ENH] Return artifact URL on publish
- [ENH] Invite users to Clarive
- [ENH] Load default data by profile
- [ENH] Users can choose shell runner for rulebooks
- [ENH] Kill job signal configured in yml file
- [ENH] Add default workers configuration to clarive.yml file
- [ENH] Boards shared with "ALL" users
- [ENH] Kanban custom card fields
- [ENH] Kill tree when job is cancelled
- [ENH] Kanban boards auto refresh
- [ENH] Make sure to save kanban quick view session
- [ENH] Filter data according to filter field in Topic Selector fieldlet
- [ENH] Make sure newly created boards have default lists
- [ENH] Add date fields to card layout configuration
- [FIX] Check user permissions in service.topic.remove_file
- [FIX] Make sure users with permissions can access the rule designer
- [FIX] Make sure CI permissions are working correctly
- [FIX] Make sure that the CI grid is updated after the CI is modified
- [FIX] Control exception when running scripts
- [FIX] Change project_security structure on user CI
- [FIX] User without project field permissions can edit the topic
- [FIX] Make sure React apps work in IE 11
- [FIX] Show CIs in create menu (standard edition)
- [FIX] Administrator should be able to delete artifacts in ClariveSE
- [FIX] When publishing NPM packages with scopes, tarball is empty
- [FIX] Make sure default values from variables are used when adding them
- [FIX] Make sure notifications are sent only to active users
- [FIX] Make sure to show username in "Blame by time" option for rule versions
- [FIX] Remove default values when changing type of variable resource
- [FIX] Allow single mode in variable resources
- [FIX] Escape "/" in URLs for NPM scoped packages from remote repositories
- [FIX] Avoid console message when opening a variable resource with CIs set as default values
- [FIX] Regexp for scoped packages should filter ONLY packages, not tgzs
- [FIX] Refresh resources from URL
- [FIX] Create resource from versioned tab
- [FIX] Make sure remote script element always displays a final message
- [FIX] Save variable when default value field is deleted in a variable resource
- [FIX] Make sure topic's hidden fields are available as topicfields bounds
- [FIX] Save resource when it does not have to validate fields
- [FIX] Make sure projects can be added as kanban swimlanes
- [FIX] Make sure changesets with artifact revisions attached can be opened
- [FIX] Make sure narrow menu repository navigation shows changes related to branch
- [FIX] Formatting event data if fail service used
- [FIX] Make sure that the chosen element is always selected in the rule tree
- [FIX] Reload data resource when refreshing
- [FIX] Job distribution and last jobs dashlets should filter projects assigned to user
- [FIX] Make sure user combo does not have grid mode available in topic
- [FIX] Make sure that system users are shown in the users combo
- [FIX] Display column data in edit mode for a Topic Selector fieldlet in a topic
- [FIX] Filter projects in grids by user security
- [FIX] Make sure all height sizes are available in the topic selector combo
- [FIX] Ship remote file: show log in several lines
- [FIX] Skip job dir removal in rollback
- [FIX] Remove FilesysRepo resource
- [FIX] Remove permissions option from user menu
- [FIX] Make sure the screen layout shows correctly when maximizing the description and choosing back in the browser
- [FIX] Remove session when user gets deactivated
- [FIX] Resources concurrency
- [FIX] Validate CI Multiple option just with type CI variables
- [FIX] Resource not saved when validation fails
- [FIX] Make sure that combo searches have optimal performance
- [FIX] Make sure LDAP authentication returned messages are available in stash
- [FIX] Show date and time in datetime fieldlet
- [FIX] User session should not be removed on REPL open
- [FIX] User with action.admin.users should be able to edit users
- [FIX] Make username available in dashboard rule execution
- [FIX] Make sure collapsed lists are saved in user session correctly

## Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our Install page.

## Acknowledgments

Join us in our Community to make suggestions and report bugs. Thanks to everyone who participated there.

Try Clarive now and start improving your DevOps practices.
# Kill remote processes on job cancel

Clarive 7.0.13 introduces a new feature that allows remote processes to be killed when a pipeline job is cancelled. Previously, pipeline job cancellation would only end processes local to the Clarive server and keep remote processes running. This was working as designed, as we did not intend to nuke remote processes inadvertently.

This is an interesting subject that we think could be of use within or outside the scope of Clarive, and may be useful if you're wondering how to interrupt job pipelines while they're running, or how to kill scripts running remote processes.

## Why remote processes

Pipeline job remote execution starts remote processes using one of our 3 communication agents/transports: SSH, ClaX (lightweight push agent) and ClaW (lightweight pull worker). This article focuses on the SSH transport, as it's the most generic, but it also applies to ClaX and ClaW.

When a pipeline kicks off a remote job, Clarive connects to a remote server and starts the requested command. The connection between the Clarive server and the remote machine blocks (unless in parallel mode) and remains blocked for the duration of the remote command. Here's a rulebook pipeline example:

```yaml
do:
  shell:
    host: user@remserver
    cmd: sleep 30
```

The above example will block, waiting 30 seconds for the remote sleep command to finish. During the execution of the command, if we go to the remote machine and run `ps -ef`, this is what we'd find:

```
user  12042 12012  0 07:47 ?  00:00:00 sshd: user@notty
user  12043 12042  0 07:47 ?  00:00:00 sleep 30
```

Most remote execution engines do not track and kill remote processes. The issue of killing remote processes and giving the user feedback is present in DevOps tools from Ansible to GitLab to many others.
https://gitlab.com/gitlab-org/gitlab-ce/issues/18909

*Currently killing a job will not stop remote processes*

## Killing the remote parent process

Before this release, canceling a job would only end the local process. You can reproduce the same situation from the Clarive server with the SSH client:

```
clarive@claserver $ ssh user@remserver sleep 30
```

Now suppose we kill the local process, with Clarive's job cancel command, a simple Ctrl-C, or even a `kill -9 [pid]`:

```
clarive@claserver $ ssh user@remserver sleep 30
Killed: 9
```

That typically does not work: the child processes remain alive and become children of the init process (process ID 1). This would be the result on the remote server after the local process is killed or the Clarive job canceled:

```
user  12043      1  0 07:47 ?  00:00:00 sleep 30
```

The sshd server process that was overseeing the execution of the remote command terminates, because the socket connection has been interrupted. But the remote command is still running.

## Pseudo-TTY

One way to interrupt the remote command is the `ssh -t` option. The `-t` flag tells the SSH client to allocate a pseudo-TTY, which basically makes the local terminal a mirror of what a remote terminal would be, instead of just running a command. If you have never used it, give it a try:

```
$ ssh -t user@remserver vim /tmp/
```

It will open vim locally as if you had a terminal open on the remote machine. If you now try to kill a process started with `-t` using Ctrl-C, the remote sshd process will terminate its child processes as well, just as when you hit Ctrl-C on a local process:

```
$ ssh -t user@remserver sleep 30
^C
Connection to remserver closed.
```

No remote processes remain alive after the kill, and `sleep 30` disappears on remserver. However, this technique does not solve our problem: pipeline jobs are not interactive, so we cannot tell the SSH channel to send a remote kill just by setting up a pseudo-TTY.
The kill signal only has an effect locally and on the remote sshd; it is not interpreted as a user manually hitting Ctrl-C.

The solution: tracking and pkill

The way to correctly stop remote processes when pipeline jobs are cancelled is to do it in a controlled fashion:

1) The Clarive job process starts the remote command and keeps the connection open.
2) The Clarive job is cancelled (normally by the user, through the job monitor).
3) Clarive creates a new connection to every server where commands are being executed.
4) A pkill -[signal] -P $PPID command is sent over that connection, targeting the sshd process of the original connection.
5) The pkill kills the parent remote sshd process and all of its children, also known as the process tree.

That way, all remote processes are stopped when the job is cancelled.

Successfully killing remote processes will kill the full remote tree

Picking a signal

Additionally, we've introduced control over the local and remote signals sent to end the processes. You may want to send a more stern kill -9 or just a polite kill -15 to the remote process. Clarive will not wait for the remote process to finish since, as we have witnessed many times, certain shutdown procedures can take forever. It does, however, apply a timeout to the local job processes that may still be waiting for the remote process to finish.
The following config/[yourconfig].yml file options are available:

```yaml
# kill signal used to cancel job processes
# - 9 if you want the process to stop immediately
# - 2 or 15 if you want the process to stop normally
kill_signal: 15

# 1|0 - if you want to kill the job children processes as well
kill_job_children: 1

# signal that will be sent to remote children
kill_children_signal: 15

# seconds to wait for killed job child processes to be reaped
kill_job_children_timeout: 30
```

Why killing remote processes is important

When we get down to business, DevOps is as much about running processes on remote servers, cloud infrastructure and containers as it is about creating a do-IT-yourself culture of empowerment. If you are building DevOps pipelines with remote process execution and want to stop a run midway for whatever reason, it's important to have a resilient process tree that is tracked and can be killed on request by the master process.

Happy scripting!

Get an early start and try Clarive now. Install your 30-day trial here.
Serverless Deployment with Clarive

Check out how to get started with a complete Lambda delivery lifecycle in this blog post.

Today we'll take a look at how to deploy a Lambda function to AWS with Clarive 7.1. This example also includes some interesting ideas that you can implement in your .clarive.yml files to manage your application deployments, such as variables that are parsed by Clarive.

Setup

Add the following items to your Clarive instance:

- A Slack incoming webhook pointing to your Slack account's defined webhook URL (check https://api.slack.com/incoming-webhooks)
- Two variables with your AWS credentials: aws_key (type: text) and aws_secret (type: secret)

Slack is not actually mandatory to run this example, so you can just skip it. You could also hardcode the variables into the .clarive.yml file, but then you would be missing one of Clarive's nicest features: variable management 😉

Create your 2 AWS variables in Clarive

Head over to the Admin > Variables menu to set up the aws_key and aws_secret variables:

Setup your AWS Lambda credentials with Clarive variables

As the variable type field indicates, secret variables are encrypted in the Clarive database.

Create a secret variable to store your AWS credentials

.clarive directory contents

As you can see in the following .clarive.yml snippet, we'll be using a rulebook operation to parse the contents of a file:

```yaml
- aws_vars = parse:
    file: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}/.clarive/vars.yml"
```

In this case we load a file called vars.yml from the .clarive directory in your repository, and its variables become available in the aws_vars structure for later use, e.g. {{ aws_vars.region }}. If you have a look at that directory, there is one vars.yml for each environment. Clarive will use the correct file depending on the target environment of the deployment job.
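For reference, a per-environment vars.yml could look something like this. Only region is referenced above; the other entry is purely an illustrative assumption, not taken from the example repository:

```yaml
# .clarive/vars.yml — hypothetical values for one environment (e.g. DEV)
region: eu-west-1
stage: dev   # illustrative extra key, not from the example repo
```

Each environment's .clarive directory would carry its own copy of this file with the values that apply there.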
The .clarive.yml file

The .clarive.yml file in your project's repository defines the pipeline rule that executes during CI/CD, building and deploying your Lambda function to AWS. This pipeline rule will:

- replace variables in the Serverless repository files with the contents stored in Clarive
- run the serverless command to build and deploy your Lambda function
- notify users in a Slack channel with info about the version and branch being built/deployed

Slack plugin operation in use

To post updates on our rule execution to your Slack chat, we'll use the slack_post operation available in our slack plugin here. With Clarive's templating features we can generate a more self-descriptive Slack message (also called a payload):

```yaml
- text =: |
    Version: {{ ctx.job('change_version') }}
    Branch: {{ ctx.job('branch') }}
    User: {{ ctx.job('user') }}
    Items modified:
    {{ ctx.job('items').map(function(item){ return '- (' + `${item.status}` + ') ' + `${item.item}`}).join('\n') }}
- slack_post:
    webhook: SlackIncomingWebhook-1
    payload:
      attachments:
        - title: "Starting deployment {{ ctx.job('name') }} for project {{ ctx.job('project') }}"
          text: "{{ text }}"
          mrkdwn_in: ["text"]
```

You can play around with it, experimenting with different formats and adding or removing payload contents at will.

Replacing variables in your source code

We use the sed operation in the build step:

```yaml
- sed [Replace variables]:
    path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
    excludes:
      - \.clarive
      - \.git
      - \.serverless
```

This will parse all files in the specified path: and replace all {{}} and ${} variables found. You can find a couple of examples in the handler.js file in the repository.
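To get a feel for what that step does, here is a rough local approximation using plain sed. This is not Clarive's sed operation itself (which walks the whole repository tree honoring excludes); the file name and variable here are made up for the example:

```shell
# Create a sample file containing a {{ }} placeholder, then replace it —
# roughly what the sed operation does for every file under path:
printf 'region = "{{ region }}"\n' > /tmp/handler_sample.txt
sed -i 's/{{ region }}/eu-west-1/g' /tmp/handler_sample.txt
cat /tmp/handler_sample.txt   # → region = "eu-west-1"
```

In the real pipeline, the replacement values come from the Clarive variables and the parsed aws_vars structure rather than being hardcoded.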
Docker image

Our rule uses an image from https://hub.docker.com that has the Serverless framework already installed:

```yaml
- image:
    name: laardee/serverless
    environment:
      AWS_ACCESS_KEY_ID: "{{ ctx.var('aws_key') }}"
      AWS_SECRET_ACCESS_KEY: "{{ ctx.var('aws_secret') }}"
```

Note that we set the environment variables the Serverless commands need to point to the correct AWS account.

Operation decorators

Some of the operations in the sample .clarive.yml file use decorators, such as [Test deployed application]. Decorators are shown in the job log inside Clarive instead of the operation name. They make the job log easier to read, since operations carry textual descriptions of what they are doing. This is especially true with longer pipelines and complex rules.

Clarive job log message decorator in action

You can even use variables in decorators to make them more intuitive for the user! The full .clarive.yml file is available on our GitHub instance: https://github.com/clarive/example-app-serverless/blob/master/.clarive.yml

Building and deploying your Serverless app

First, clone this GitHub repository into a new or an existing project (e.g. project: serverless, repository: serverless). Clone the new Clarive repository from your Git client:

```
git clone http[s]://<your_clarive_instance_URL>/git/serverless/serverless
```

Create a new topic branch. Here we'll be tying our branch to a User Story in Clarive:

```
cd serverless
git checkout -b story/to_github
```

Now commit and push some changes to the remote repository and go to the Clarive monitor. It should have created a new user story topic for you and automatically launched the CI build:

Serverless CI/CD job with Clarive

From here on, you can run the full build and deploy lifecycle, including deploying to other environments (e.g. Production or QA) and other deployment workflows.
Just set up different variable values for each environment, and the CI/CD pipeline will deploy to the corresponding environment when the time comes. Enjoy!

Get an early start and try Clarive now. Install your 30-day trial here.