The third and final way to automate delivery I will discuss is rule-driven automation.


Rule-driven automation ties together event triggers and actions as the environment evolves from state A to state B when changes are introduced.

Rules capture what changes are being delivered, when, where (in which environment), and how.

Rules are driven by events and behavior and are fully transparent. Rules are also behind the simplest and most effective tools employed by users of all levels, from the popular IFTTT to MS Outlook, for automating anything from simple tasks to complex processes. Why? Because rules are both easy to implement and easy to understand.

Let’s again use an analogy from software development to make the rule-driven concept clear. It reminds me of my university days, when I worked with rule-based systems. At the time, we made a distinction between procedural and logical knowledge. Let me quickly recap and explain both.

Procedural knowledge is knowledge about how to perform some task. Examples are how to provision an environment, how to build an application, how to process an order, how to search the Web, and so on. Given their architectural design, computers have always been well suited to storing and executing procedures. As discussed before, most early programming languages made it easy to encode and execute procedural knowledge, since they evolved naturally from the underlying computational component (the computer). In a computer, procedural knowledge appears as sequences of statements in a programming language.
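To make the contrast concrete, here is a minimal sketch of procedural knowledge expressed in Python. The step functions and the "web-01" host name are hypothetical placeholders; the point is simply that the knowledge lives in the fixed order of the statements:

# Minimal sketch: procedural knowledge as an ordered, hard-coded sequence of steps.
# The step functions and the "web-01" host name are hypothetical placeholders.

def create_vm(host):
    print(f"creating VM {host}")

def install_packages(host):
    print(f"installing packages on {host}")

def deploy_app(host):
    print(f"deploying application to {host}")

def provision_environment(host):
    # The "how" of the task is encoded in the order of these calls.
    create_vm(host)
    install_packages(host)
    deploy_app(host)

provision_environment("web-01")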

Logical knowledge, on the other hand, is knowledge of the relationships between entities. It can relate a product and its components, symptoms and a diagnosis, or the dependencies between various tasks, for example. This should sound familiar from application delivery: dependencies between components, relationships between applications, release dependencies, and so on.

Unlike factual and procedural knowledge, logical knowledge has no core architectural component within a traditional computer that is well suited to storing and using it. Looking in more detail, logical knowledge consists of many independent chunks that are too complex to store easily in a database, and that often lack an implied order of execution. This makes it ill-suited for straight programming: logical knowledge is difficult to encode and maintain using the conventional database and programming tools that have evolved from the underlying computer architectures.

This is why rule-driven development, expert system shells, and rule-based systems built on rule engines became popular. Such a system was a kind of virtual environment within the computer that would infer new knowledge from known factual data combined with IF-THEN rules, decision trees, or other forms of logical knowledge that could be defined.
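As a purely illustrative sketch (not the syntax of any particular expert system shell or product), the following Python snippet shows the core idea: a set of known facts, a set of IF-THEN rules, and a loop that keeps applying rules until no new knowledge can be inferred. The fact and rule names are made up for this example:

# Toy forward-chaining rule engine: infer new facts from known facts and IF-THEN rules.
# Fact names and rules are illustrative, not any product's syntax.

facts = {"war_built", "db_schema_ready"}

# Each rule: IF all conditions are known facts THEN add the conclusion as a new fact.
rules = [
    ({"war_built"}, "change_approved"),
    ({"war_built", "db_schema_ready"}, "app_deployable"),
    ({"app_deployable", "change_approved"}, "release_ready"),
]

inferred = True
while inferred:
    inferred = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            inferred = True

print(facts)  # now also contains "change_approved", "app_deployable" and "release_ready"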

It is clear that building, provisioning, and deploying applications, or deploying a release with release dependencies, involves a tremendous amount of logical knowledge. This is exactly why deployment is often seen as complex: we try to define a procedural script for something that has too many logical, non-procedural knowledge elements.

For this reason, I believe that rule-driven automation for release automation and deployment has a lot of potential.
In a rule-driven automation system, matching rules react to the state of the system. The model is a result of how the system reconfigures itself.

Rule-driven automation is based on decision trees that are very easy to grasp and model, because they:

  • Are simple to understand and interpret. People can understand event triggers and rules after a brief explanation, and rule decision trees can be displayed graphically in a way that is easy for non-experts to interpret.
  • Require little data preparation. A model-based approach requires normalization into a model; behaviors, however, can be turned into a rule decision tree without much effort: IF a THEN b.
  • Support full decoupling. With the adoption of service-oriented architectures, automation must be decoupled so that it is easy to replace, adapt and scale.
  • Are scalable, replaceable and reliable. Decoupled logic can scale and is safer to replace, improve and deploy continuously.
  • Are robust. They resist failure even if their assumptions are somewhat violated by variations in the environment.
  • Perform well in large or complex environments. A large number of decisions can be executed on standard computing resources in reasonable time.
  • Mirror human decision making more closely than other approaches. This is useful when modeling human decisions and behavior, and it makes the approach suitable for applying machine learning algorithms.

The main features of rule-driven automation include:

  • Rules model the world using basic control logic: IF this THEN that. For every rule there is an associated action, and actions can be looped and further broken down into conditions (a minimal sketch follows this list).
  • Rules are loosely coupled and can therefore execute in parallel and en masse without the need to create orchestration logic.
  • Rules are templates and can be reused extensively.
  • Rules can be chained and their concurrency can be controlled.
  • Rules handle complex delivery use cases including decision and transformation.
  • The model is a result of how rules interact with the environment. Models and blueprints can also be used as input, but are not a requirement.
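Here is the minimal sketch referred to in the list above: each rule is an independent condition/action pair that reacts to an incoming event, with no orchestration logic tying the rules together. The event fields, repository name and actions are hypothetical and do not reflect any specific tool's syntax:

# Minimal sketch of rule-driven automation: independent IF-THEN rules reacting to events.
# Event fields and actions are hypothetical; rules are loosely coupled and could run in parallel.

rules = [
    # (IF: condition over the incoming event, THEN: action to run)
    (lambda e: e["type"] == "commit" and e["branch"] == "main",
     lambda e: print(f"build {e['repo']}")),
    (lambda e: e["type"] == "build_ok",
     lambda e: print(f"deploy {e['repo']} to qa")),
    (lambda e: e["type"] == "deploy_ok" and e["env"] == "prod",
     lambda e: print(f"notify release manager about {e['repo']}")),
]

def handle(event):
    # Every rule whose condition matches fires, independently of the other rules.
    for condition, action in rules:
        if condition(event):
            action(event)

handle({"type": "commit", "branch": "main", "repo": "shop-frontend"})
handle({"type": "build_ok", "repo": "shop-frontend"})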

Get an early start and try Clarive now. Get your custom cloud instance for free.



Here is the second post in our series on DevOps automation models. Model-driven delivery automation is based on predefined blueprints of the environments.


Blueprints model what needs to exist and where. Logic is then attached to the distinct components in the blueprint by defining how each component can be created, updated or decommissioned.

Model-driven deployment is the preferred method of many recent Application Release Automation (ARA) tools and of some Configuration Management tools. The approach became popular in the early 2000s, when Software Defined Data Centers and Virtual Machines became the new norm.

Models came as a welcome improvement over process-driven systems. With model-driven systems, you first need to understand and represent the environment as a model, and then define how to deliver changes to it. Because the environment is understood up front, the model logic becomes segmented, and therefore reusable.

A higher abstraction

Let’s make another analogy to software development to make the model-driven concept clear. Around the early 2000s, Model-Driven Development (MDD) also became popular as an alternative to traditional development. There are two core concepts associated with model-driven development: abstraction and automation.

In MDD, the software application model is defined at a higher abstraction level and then converted into a working application through automated transformation or interpretation. The right model-driven development approach leverages model execution at run time, where the model is automatically transformed into a working software application by interpreting and executing it (removing the need to generate or write code). This means that executable code is “generated” automatically based on the model and on transformations of that model for the specific lower-level environment settings. The higher the level of abstraction, the more reuse becomes possible. For this reason, a model-driven development platform is often referred to as a high-productivity platform, given the unprecedented speed at which developers can build and deploy new applications. This speed is derived from the use of models and other pre-built components that business and technical teams use to visually construct applications.

The approach described above can easily be mapped to application delivery as follows: Those responsible for environments, application components, and deployment processes are all able to work together, but they define and manage their own delivery related aspects separately. As such there is a clear separation of duties and abstraction is made for each area of focus:

  • Application model (e.g. application server, web server, database server, WAR, SQL, GEM, service bus, etc.)
  • Environmental model (e.g. public cloud (public, private), container platform, ERP packages, VM, OS, storage, network, security, etc.)
  • Process model (e.g. installation order, variable settings, dependencies, etc.)

The first step in a model-driven approach is therefore to define a blueprint of the environment.
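As a purely illustrative sketch of what such a blueprint separates (the component names, hosts and fields below are hypothetical and not the schema of any specific ARA tool), it can be thought of as structured data covering the application, environmental and process models, with delivery logic attached per component:

# Hypothetical blueprint sketch separating the three models described above.
# Field names and values are illustrative only, not a real tool's schema.

blueprint = {
    "application": {                      # application model
        "schema": {"artifact": "schema.sql", "runs_on": "db_server"},
        "web": {"artifact": "shop.war", "runs_on": "app_server"},
    },
    "environment": {                      # environmental model
        "db_server": {"type": "vm", "os": "linux"},
        "app_server": {"type": "vm", "os": "linux"},
    },
    "process": {                          # process model
        "order": ["schema", "web"],       # installation order
    },
}

def deploy(blueprint):
    # Walk the process model and apply the per-component delivery logic.
    for name in blueprint["process"]["order"]:
        component = blueprint["application"][name]
        target = blueprint["environment"][component["runs_on"]]
        print(f"deploy {component['artifact']} to {component['runs_on']} ({target['type']})")

deploy(blueprint)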

Model-driven delivery automation is often split into three or more layers, each orchestrated and modeled separately.

The main features of model-driven automation include:

  • Models the environment components and their relationships.

  • Higher level of abstraction.

  • Not tightly coupled.

  • Components are reusable.

  • Component orchestration process is detached.

Model-driven automation requires complex orchestration to get even the simplest of things done.

Pitfalls

Today, model-driven automation is an important and integral part of most configuration management tools and enterprise infrastructure automation software. Moreover, container technology such as Docker has models built into its metadata.

But this creates a challenge: in the realm of continuous delivery, modeling has become synonymous with duplication. In a world where containerized microservices and infrastructure-as-code have attained widespread adoption, the model is already embedded in the application or service being delivered. What is the point of implementing another copy of it in your ARA tool?

On top of that, in hybrid platform environments, models are also hard to get started with. They require describing complex enterprise architectures and application relationships in a graphical view. Imagine having to model an environment that spans ERP packages such as SAP, Siebel or Salesforce, in combination with mobile platforms and mainframe LPARs…

Finally, previous work, such as scripts or processes, is harder to adapt to models, since, as we saw, scripts combine actions with correlated steps. Model-driven systems are therefore tougher to migrate to if you want to preserve the investments you have already made.

In the next blog post, we will take a closer look at rule-driven automation, a final alternative way to handle deployments that addresses some of the challenges raised here.



Try Clarive now. Get your custom cloud instance for free.


Clarive Community Cloud now available with Clarive 7


Code, track and deploy your software releases with Clarive. Get your own cloud instance right now.


We’re proud to announce the launch of our community cloud instances.

Our community cloud instance is completely free of charge and runs on a dedicated AWS instance and database, preinstalled with Clarive Standard Edition 7.0.8. The instance is limited to 25 users or nodes, which should be more than enough for most teams.

What’s Inside Clarive SE

If you are asking what’s inside the box, here’s a quick overview:

  • A git repository manager with unlimited private repositories
  • Auto topic-branch management
  • Issue tracking (we call them topics), with scrum and kanban
  • A unique kanban UI, with swimlanes and backlog management
  • A release management workflow
  • CI/CD: continuous integration and continuous deployment pipelines that run in your Clarive cloud instance
  • Deployment environment management
  • Dashboarding
  • Event rules
  • Customizable form fields

Here are a few getting started steps:

  1. Get your free instance here.
  2. Once you get the activation email, login.
  3. Create a new Project.
  4. Create a Feature topic in the project ⇒ this will create a branch.
  5. git clone https://[your-instance]/git/[your project]/[your repo]
  6. Code and push.
  7. Repeat!

Yet Another Platform?

But why would you need Clarive SE over most of the solutions out there?

What we offer is a unique end-to-end DevOps solution that implements a unified topic-based DevOps workflow.

Clarive SE should interest you if:

  • You are looking for an end-to-end DevOps solution, from code to track to test and deploy.

  • You want to code your CI/CD logic into your repository (with YAML or any programming language) and not depend on separate pipeline management done by tools like Jenkins or Bamboo.

  • You want your CI/CD to run in our cloud server… then deploy to yours.

  • You always wanted to build projects that span multiple repositories.

  • You think Git branching and issue tracking should work seamlessly as one, not as two separate entities.

Under the hood

Once on your cloud server you’ll find a complete CI/CD pipeline platform based on Docker. Every Clarive pipeline job runs in a Docker container of your choice.

Containers Everywhere

Pipeline containers run in the Clarive cloud server; every time you request a Docker container, it is downloaded from Docker Hub and installed on the server permanently. For example:

# Build the app: run webpack inside the "node" image, then pyb inside "pybuilder"
build:
   - image: node
   - webpack ./app.js dist/bundle.js
   - image: pybuilder
   - pyb

# Ship the built bundle to the production server
deploy:
   - ship:
       host: prodserver
       from: "dist/bundle.js"
       to: "/opt/app/{{ env }}/"

# provision_aws: echo the instance type passed in the request parameters, then use the "aws" image
/provision_aws:
   - echo: "instance type={{ ctx.request('params').type }}"
   - image: aws

All shell commands will run against the container. You can read more in the rulebooks documentation.

Upgrading

If you run out of users or nodes, or just want more power or storage to manage your code and run CI/CD, our team edition is the most affordable end-to-end solution on the market right now.

All our editions are available on-premise as well, in case you prefer to host your own Clarive.

For ultimate flexibility and customization you can upgrade to Clarive EE, a full-fledged Application Delivery platform that adds:

  • Visual rule designer for creating custom workflows, topic categories and complex release orchestration

  • Custom reporting and insights

  • Role-based predefined dashboard designer

  • Step-by-step pipeline debugging

  • And an assortment of change providers, including SVN, Visual Studio, SAP, Salesforce, Siebel, Mainframe and other change sources through our enterprise connectors.

The Clarive Community Cloud is currently in beta while we test the new provisioning infrastructure and machine configurations.


Try Clarive now. Get your custom cloud instance for free.