This video shows how Clarive Enterprise Edition (EE) can deploy WAR files to Tomcat, .NET applications to Windows servers, and mainframe Endevor packages with a single pipeline.


In the early stages, each technology (WAR, .NET, mainframe) is deployed separately, but for the production deployment, all three technologies are deployed to three different platforms with a single job.
All deployments are driven by a SINGLE pipeline rule.

Watch the video here

This approach guarantees consistency during deployment.

Three technologies (Java, .NET, and COBOL programs in Endevor packages), three platforms (Tomcat on Linux, Microsoft Windows Server, and mainframe), and three environments (DEV, QA, PROD), all deployed with jobs generated by a SINGLE pipeline rule.
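
To make the idea concrete, a single pipeline rule could simply branch on the technology of each change it processes. The sketch below is rulebook-style YAML written for illustration only; the op names, variables, and target labels are assumptions, not Clarive's exact DSL:

do:
- if: "${technology} == 'java'"
  then:
  - ship: {file: app.war, to: tomcat-linux}        # WAR to Tomcat on Linux
- if: "${technology} == 'dotnet'"
  then:
  - ship: {file: app.zip, to: windows-server}      # .NET to Windows
- if: "${technology} == 'cobol'"
  then:
  - shell: endevor package execute ${package_id}   # Endevor package on the mainframe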

The pipeline rule can be extended with other technologies (Siebel, SAP, mobile apps), additional environments (User Acceptance, PreProd), and additional platforms (iOS, Android).

With this, Clarive EE offers an end-to-end view of the release process, with extended dashboarding capabilities and easy tool navigation.

Get the full insight here


Get an early start and try Clarive now. Get your custom cloud instance for free.



The third and final way to automate delivery I will discuss is rule-driven automation.


Rule-driven automation ties together event triggers and actions as the environment evolves from state A to state B when changes are introduced.

Rules understand what changes are being delivered, when, where (the environment), and how.

Rules are driven by events and behavior and are fully transparent. Rules are also behind the simplest and most effective tools employed by users of all levels, from the popular IFTTT to MS Outlook, for automating anything from simple tasks to complex processes. Why? Because rules are both easy to implement and easy to understand.

Let’s again use the analogy of software development to make the rule-driven concept clear. It reminds me of my university days, when I worked with rule-based systems. At that time, we made the distinction between procedural and logical knowledge. Let me recap and explain both quickly.

Procedural knowledge is knowledge about how to perform some task. Examples are how to provision an environment, how to build an application, how to process an order, how to search the Web, etc. Given their architectural design, computers have always been well-suited to store and execute procedures. As discussed before, most early-day programming languages make it easy to encode and execute procedural knowledge, as they have evolved naturally from their associated computational component (computer). Procedural knowledge appears in a computer as sequences of statements in programming languages.
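
As a tiny illustration, procedural knowledge in a delivery context is just an ordered list of steps. Here is a hypothetical rulebook-style YAML sketch; the ship op and host names are illustrative assumptions:

do:
- shell: mvn package                            # 1. build the WAR
- ship: {file: target/app.war, to: tomcat-01}   # 2. copy the artifact (illustrative op)
- shell: sudo systemctl restart tomcat          # 3. restart the app server

The order is the knowledge: swap the steps and the procedure breaks.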

Logical knowledge, on the other hand, is knowledge of the relationships between entities. It can relate a product and its components, symptoms and a diagnosis, or various tasks to one another. This should sound familiar when you look at application delivery: dependencies between components, relationships between applications, release dependencies, and so on.

Unlike for factual and procedural knowledge, there is no core architectural component within a traditional computer that is well suited to storing and using such logical knowledge. Looking in more detail, there are many independent chunks of logical knowledge that are too complex to store easily in a database, and they often lack an implied order of execution. This makes this kind of knowledge ill-suited for straight programming. Logical knowledge is difficult to encode and maintain using the conventional database and programming tools that have evolved from underlying computer architectures.

This is why rule-driven development, expert system shells, and rule-based systems using rule engines became popular. Such a system was a kind of virtual environment within a computer that could infer new knowledge from known factual data and IF-THEN rules, decision trees, or other forms of logical knowledge.
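
A minimal sketch of the distinction, in YAML invented for this example (not any particular engine's syntax): the engine repeatedly matches rules against the facts and asserts whatever follows, with no fixed execution order.

facts:
- {app: billing, depends_on: payments-api}
- {app: payments-api, status: deploy_failed}

rules:
- if: depends_on has status 'deploy_failed'
  then: set status 'blocked'         # inferred fact, not stored up front
- if: status is 'blocked'
  then: notify 'release-manager'     # chains off the inferred fact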

It is clear that building, provisioning, and deploying applications, or deploying a release with release dependencies, involves a tremendous amount of logical knowledge. This is exactly why deployment is often seen as complex: we are trying to define a procedural script for something that has too many logical, non-procedural knowledge elements.

For this reason, I believe that rule-driven automation has a lot of potential for release automation and deployment.

In a rule-driven automation system, matching rules react to the state of the system, and the model is a result of how the system reconfigures itself.

Rule-driven automation is based on decision trees that are very easy to grasp and model, because they:

  • Are simple to understand and interpret. People are able to understand event triggers and rules after a brief explanation. Rule decision trees can also be displayed graphically in a way that is easy for non-experts to interpret.
  • Require little data preparation. A model-based approach requires normalization into a model; behaviors, however, can easily be turned into a rule decision tree without much effort: IF a THEN b.
  • Support full decoupling. With the adoption of service-oriented architectures, automation must be decoupled so that it is easy to replace, adapt, and scale.
  • Are scalable, replaceable, and reliable. Decoupled logic can scale and is safer to replace, continuously improve, and deploy.
  • Are robust. A rule resists failure even if its assumptions are somewhat violated by variations in the environment.
  • Perform well in large or complex environments. A great number of decisions can be executed using standard computing resources in reasonable time.
  • Mirror human decision making more closely than other approaches. This is useful when modeling human decisions and behavior, and makes rules suitable for applying machine learning algorithms.

The main features of rule-driven automation include:

  • Rules model the world using basic control logic: IF this THEN that. For every rule there is an associated action. Actions can be looped and further broken down into conditions.
  • Rules are loosely coupled and can therefore execute in parallel and en masse, without the need to create orchestration logic.
  • Rules are templates and can be reused extensively.
  • Rules can be chained, and their concurrency can be controlled (see the sketch after this list).
  • Rules handle complex delivery use cases, including decision and transformation.
  • The model is a result of how rules interact with the environment. Models and blueprints can also be used as input, but are not a requirement.
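
Here is a minimal sketch of two chained, loosely coupled rules in that IF-this-THEN-that spirit. The event, condition, and op names are illustrative assumptions rather than Clarive's exact rule syntax:

rules:
- on: code_pushed                  # event trigger
  if: branch == 'release'
  do:
  - build: {target: app.war}
  - deploy: {env: QA}
- on: deploy_finished              # chains off the first rule's action
  if: status == 'ok'
  do:
  - notify: {channel: release-team}

Because the second rule only listens for an event, neither rule needs to know about the other, which is what lets rules run in parallel without orchestration logic.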

Get an early start and try Clarive now. Get your custom cloud instance for free.



It is incredible to see how much attention DevOps is getting in organizations today, and how much that attention keeps growing.


As a DevOps tool vendor, we welcome this, of course, but at the same time it confirms that successfully implementing DevOps within organizations is not as simple as we are sometimes led to believe. The obvious question then is, of course: why?

In essence, DevOps is about improved collaboration between Dev and Ops and about automating all delivery processes (made as lean as possible) for quality delivery at the speed of business. If you want to learn more about DevOps and how to implement it in bigger enterprises, take a look at the 7 Step Strategy ebook on our website.

End-to-end delivery processes can be grouped into 3 simple words: Code, Track, and Deploy.

  • Code represents the delivery tasks and processes closely aligned with the Dev side.
  • Deploy represents the delivery tasks and processes closely aligned with the Ops side of the delivery chain.
  • Track is what enables improvement and better collaboration: tracking progress across the entire delivery chain. To make delivery processes lean, you need accurate, real-time, factual process data to analyze, learn, and improve.

Thinking about the delivery toolchain

Many colleagues have written about the cultural aspects of DevOps and the related challenges of implementing DevOps within the organization. I concur with their statements and am generally in agreement with their approach for a successful DevOps journey.

To change a culture and/or team behaviour, though, especially when team members are spread globally, the organization needs to think carefully about its delivery toolchain. Why? Because a prerequisite for a culture to change, or, even more basic, simply for people to collaborate, is that people are able to share assets and information in real time, regardless of location.

The reality in many organizations today is that people are grouped into teams and each team has the freedom to choose its own tooling, based on platform support, past experience, or simply preference. As a result, organizations very quickly find themselves in a situation where the delivery toolchain becomes a big set of disconnected tools for performing various code and deploy tasks. The lack of integration results in many manual tasks to “glue” it all together. Inevitably, they all struggle with end-to-end tracking, and a lot of time and energy is wasted on this. I often see this even within teams, because of the plethora of tools they have installed as their delivery toolchain.

Clarive Lean Application Delivery

SIMPLICITY is the keyword here.

Funnily enough, this is a DevOps goal! Recall that DevOps is said to apply Lean and Agile principles to the delivery process. So why do teams allow product-based silos? Why do they look for tools that fix only one particular delivery aspect, like build, deploy, version control, or test?

Instead of looking for (often open source) code or tools to automate one particular delivery aspect, a team or organization should look at the end-to-end delivery process and find the simplest way to automate it, importantly, without manual activities!

We believe that with workflow-driven deployment, teams can get code, track, and deploy automated the right way: simplified and integrated!

Workflow-driven deployment allows teams to:

  • Use discussion topics that make it simpler to manage and relate project activities: code branches are mapped 1:1 with their corresponding topic (Feature, User Story, Bugfix, etc.), making them true topic branches. This provides strong coupling between workflow and CODE.
  • Track progress and automate deployment to every environment through kanban boards. Kanban boards let you quickly visualize the status of various types of topics in any arrangement. Within Clarive, kanban topics can easily be grouped into lists, so that you can split your project in many ways. Simply drop a kanban card into an environment on a board to trigger a deployment (see the sketch after this list). Simple and fully automated! This provides strong coupling between workflow and DEPLOY automation for every environment.
The Clarive Kanban makes tracking progress easier

  • Analyze and monitor status, progress, and timing within the delivery process. It is even possible to perform pipeline profiling: profiling lets you spot bottlenecks and helps you optimize pipelines and your overall workflow using execution profiling data. All data is factual and real-time! This provides you with the ultimate TRACK information within the delivery process.
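
Behind a drag-and-drop deployment like this you can picture a single event rule. The sketch below is hypothetical; the event and field names are assumptions for illustration, not Clarive's actual syntax:

on: topic_moved
if: "board == 'Releases' and column in ['DEV', 'QA', 'PROD']"
do:
- deploy: {topic: "${topic_id}", env: "${column}"}   # dropping the card is the trigger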

The right way to go

Why is workflow-driven deployment the right way to go? Because it breathes the true objectives of DevOps: better, seamless collaboration between Dev and Ops, with automation everywhere possible at the toolchain level, not only at the process/human level. This makes a big difference. I believe that a lot of companies continue to struggle with DevOps simply because they are shifting their collaboration and automation issues from their current processes and tools to a disconnected DevOps toolchain that exposes similar and new problems. As a result, they become skeptical about the DevOps initiative, and blame the toolchain for it (always easier than blaming people)!

A DevOps platform that enables true workflow-driven deployment blends process automation with execution automation and has the ability to analyze and track automation performance from start to finish. This is what you should look for to enable DevOps faster: a simplified, integrated toolchain that gets the job done with transparency, so organizations can concentrate on the cultural and people-related aspects.


Try Clarive now. Get your custom cloud instance for free.



Here is the second post in our series on DevOps automation models. Model-driven delivery automation is based on predefined blueprints of the environments.


Blueprints model what needs to exist and where. Logic is then attached to the distinct components in the blueprint by defining how each component can be created, updated or decommissioned.

Model-driven deployment is the preferred method of many recent Application Release Automation (ARA) tools and also of some configuration management tools. The approach became popular in the early 2000s, when software-defined data centers and virtual machines became the new norm.

Models came as a welcome improvement over process systems. With model-driven systems, you need to understand and represent the environmental model first, then define how to deliver changes to it. By understanding the environment, model logic becomes segmented, and therefore reusable.

A higher abstraction

Let’s make another analogy to software development to make the model-driven concept clear. Around the early 2000s, Model-Driven Development (MDD) also became popular as an alternative to traditional development. There are two core concepts associated with model-driven development: abstraction and automation.

In MDD, the software application model is defined at a higher abstraction level and then converted into a working application using automated transformations or interpretations. The right model-driven development approach leverages model execution at runtime, where the model is automatically transformed into a working software application by interpreting and executing the model (removing the need to generate or write code). This means that executable code is “generated” automatically based on the model and on the transformation of this model according to the specific lower-level environmental settings. The higher the level of abstraction, the more likely reuse becomes possible. For this reason, a model-driven development platform is often referred to as a high-productivity platform, given the unprecedented speed at which developers can build and deploy new applications. This speed is derived from the use of models and other pre-built components that business and technical teams use to visually construct applications.

The approach described above can easily be mapped to application delivery as follows: those responsible for environments, application components, and deployment processes are all able to work together, but they define and manage their own delivery-related aspects separately. As such, there is a clear separation of duties, and an abstraction is made for each area of focus:

  • Application model (e.g. application server, web server, database server, WAR, SQL, GEM, service bus, etc.)
  • Environmental model (e.g. cloud (public, private), container platform, ERP packages, VM, OS, storage, network, security, etc.)
  • Process model (e.g. installation order, variable settings, dependencies, etc.)

The first step in a model-driven approach is therefore to define a blueprint of the environment.

Model-driven delivery automation is often split into three or more layers, each orchestrated and modeled separately, for example:
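
As an illustration, here is a hypothetical blueprint fragment split into those layers; all component names and fields are invented for this sketch:

application:
  webapp: {type: war, requires: [tomcat, orders-db]}

environment:
  tomcat: {type: app-server, host: linux-vm-01}
  orders-db: {type: database, host: db-cluster-02}

process:
  order: [orders-db, tomcat, webapp]   # installation order
  on_failure: rollback

Each layer can be owned and changed by a different team; the delivery tool then transforms the combined model into the concrete jobs that create, update, or decommission each component.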

The main features of model-driven automation include:

  • Models the environment components and their relationships.

  • Higher level of abstraction.

  • Not tightly coupled.

  • Components are reusable.

  • Component orchestration process is detached.

Model-driven automation requires complex orchestration to get even the simplest of things done

Pitfalls

Today, model-driven automation is an important and integral part of most configuration management tools and enterprise infrastructure automation software. Moreover, container technology such as Docker has models built into its metadata.

But this creates a challenge: in the realm of continuous delivery, modeling has become synonymous with duplication. In a world where containerized microservices and infrastructure as code have attained widespread adoption, the model is already embedded in the application or service being delivered. What is the point of having to implement another copy of it in your ARA tool?
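
A standard docker-compose.yml makes the point: the components and their relationships are already modeled in the artifact that ships with the service (the image names below are illustrative):

services:
  web:
    image: myorg/webapp:1.4.2
    depends_on: [db]        # the relationship is part of the artifact
    ports: ["8080:8080"]
  db:
    image: postgres:15

Re-describing web, db, and their dependency in a separate ARA model is a second copy of the same knowledge that has to be kept in sync by hand.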

On top of that, in hybrid platform environments, models are also hard to get started with. They require describing complex enterprise architectures and application relationships in a graphical view. It would be great to see the model of an environment that spans ERP packages such as SAP, Siebel, or Salesforce, in combination with some mobile apps and mainframe LPARs…

Finally, previous work, such as scripts or processes, is harder to adapt to models, since, as we saw, scripts combine actions with correlated steps. So model-driven systems are tougher to migrate to if you want to retain the investments you have already made.

In the next blog post, we will take a closer look at rule-driven automation, a final alternative way to handle deployments that addresses some of the challenges raised here.



Try Clarive now. Get your custom cloud instance for free.


We’re pleased to present our latest Clarive SE release 7.0.9.

This release contains a number of fixes and improvements from 7.0.8.

New settings menu

The product team is constantly working on further improving the Clarive UI, with a special focus on user-friendliness for new users. This has also been the case in this release.

First of all, we reworked the Config option in the Admin menu: we improved the interface and renamed it to Settings.

In this new design we divided general configuration into sections, allowing users to see all options more clearly and intuitively.

Rulebook shell multiline

Rulebooks are one of the most unique features in Clarive, so we keep improving them to make them even more flexible and usable. This release adds multiline capability to the shell command, so users are now able to write rules like:

do:
- shell: |
    ls -lart
    echo hello > hi.txt
    cat hi.txt

Or simpler:

do:
- |
    ls -lart
    echo hello > hi.txt
    cat hi.txt

Change username and email in Preferences

Prior to this release, if users wanted to change their username or email address, they needed to contact the tool administrator, as only this person could do it.

In this latest release we added two new fields under user preferences, allowing users to change their own username and email.

Move artifacts menu from Tools to Deploy

Ease of use comes with predictability. We want to achieve this without losing the essential capabilities of the product. That’s why the product team has been reviewing menu consistency, with the focus of making it more intuitive and clear where options are to be found in the left-hand menu.

As a result, we have removed the Tools option from the left panel and added Artifacts to the Deploy menu.

Improvements and issues resolved in this release

  • [ENH] – Improvements in rulebooks documentation
  • [ENH] – Ext-to-React panel now supports ES6
  • [ENH] – New Dockerfile for supporting both adduser and useradd containers
  • [ENH] – Custom antd components
  • [ENH] – Rename clone URL button
  • [ENH] – Admin user has group permissions on initial load
  • [FIX] – Force Clarive migrations in default config file
  • [FIX] – Admin user email is saved in cla setup
  • [FIX] – Jail dump file op in rulebooks
  • [FIX] – Wider revision column in revision fields
  • [FIX] – “Ship a file remotely” op now shows a confirmation window before overwriting an existing file
  • [FIX] – Update number of rows in topic grid when topics are deleted
  • [FIX] – Fix position of Gauge dashlet

Try Clarive now. Get your custom cloud instance for free.