Check out how to get started with a complete Lambda delivery lifecycle in this blog post


Today we’ll take a look at how to deploy a Lambda function to AWS with Clarive 7.1.

In this example there are also some interesting ideas that you can implement in your .clarive.yml files to manage your application deployments, such as variables that can be parsed by Clarive.

Setup

Add the following items to your Clarive instance:

  • A Slack Incoming Webhook pointing to the webhook URL defined in your Slack account (check https://api.slack.com/incoming-webhooks)

  • Two variables with your AWS credentials: aws_key (type: text) and aws_secret (type: secret)

Slack is actually not mandatory to run this example, so you can just skip it. You can also hardcode the variables into the .clarive.yml file, but then you would be missing one of Clarive’s nicest features: variable management 😉

Create your 2 AWS variables in Clarive

Head over to the Admin Variables menu to set up the aws_key and aws_secret variables:

Set up your AWS Lambda credentials with Clarive variables

As the variable type field indicates, secret variables are encrypted into the Clarive database.

Create a secret variable to store your AWS credentials

.clarive directory contents

As you can see in the following .clarive.yml file, we’ll be using a rulebook operation to parse the contents of a file:

  - aws_vars = parse:
      file: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}/.clarive/vars.yml"

In this case we’ll load a file called vars.yml from the .clarive directory in your repository, and the variables will be available in the aws_vars structure for later use, e.g. {{ aws_vars.region }}.

If you have a look at that directory, there is one vars.yml for each environment. Clarive will use the correct file depending on the target environment of the deployment job.
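
For instance, a per-environment vars.yml might look like this (a minimal sketch: region is the variable referenced above, while any other keys are hypothetical and depend on what your own pipeline needs):

  # .clarive/vars.yml -- illustrative contents for one environment
  region: eu-west-1      # AWS region, referenced later as {{ aws_vars.region }}
  memory: 128            # hypothetical extra setting your pipeline could read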

The .clarive.yml file

The .clarive.yml file in your project’s repository is used to define the pipeline rule that will execute during CI/CD, building and deploying your Lambda function to AWS.

This pipeline rule will:

  • replace variables in the Serverless repository files with contents stored in Clarive

  • run the serverless command to build and deploy your Lambda function

  • notify users in a Slack channel with info from the version and branch being built/deployed

Slack plugin operation in use

For posting updates of our rule execution to your Slack chat, we’ll use the slack_post operation available in our slack plugin here. With Clarive’s templating features we’ll be able to generate a more self-descriptive Slack message (also called a payload):

  - text =: |
      Version:
         {{ ctx.job('change_version') }}
      Branch:
         {{ ctx.job('branch') }}
      User:
         {{ ctx.job('user') }}
      Items modified:
         {{ ctx.job('items').map(function(item){ return '- (' + `${item.status}` + ') ' + `${item.item}`}).join('\n') }}
  - slack_post:
      webhook: SlackIncomingWebhook-1
      payload:
         attachments:
           - title: "Starting deployment {{ ctx.job('name') }} for project {{ ctx.job('project') }}"
             text: "{{ text }}"
             mrkdwn_in: ["text"]

You can play around with it to experiment with different formats and add or remove content from the payload at will.

Replacing variables in your source code

We use the sed operation in the build step:

  - sed [Replace variables]:
      path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
      excludes:
        - \.clarive
        - \.git
        - \.serverless

This will parse all files in the specified path: and replace all {{}} and ${} variables found. You can find a couple of examples in the handler.js file in the repository.
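
For instance, a repository file such as serverless.yml could carry placeholders that the sed operation fills in before the Serverless commands run (a hypothetical fragment; the real placeholder examples live in handler.js in the repository):

  # Hypothetical serverless.yml fragment with Clarive placeholders
  service: serverless-example
  provider:
    name: aws
    region: "{{ aws_vars.region }}"   # replaced by the sed operation with the value parsed from vars.yml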

Docker image

Our rule uses an image from https://hub.docker.com that has the Serverless framework already installed:

  - image:
      name: laardee/serverless
      environment:
         AWS_ACCESS_KEY_ID: "{{ ctx.var('aws_key') }}"
         AWS_SECRET_ACCESS_KEY: "{{ ctx.var('aws_secret') }}"

Note that we set the environment variables needed for the Serverless commands to point to the correct AWS account.

Operation decorators

Some of the operations you can find in the sample .clarive.yml file are using decorators such as: [Test deployed application].

Decorators are shown in the Clarive job log instead of the operation name. They make the job log easier to read by giving each operation a textual description of what it is doing. This is especially true with longer pipelines and complex rules.

Clarive job log message decorator in action

You can actually also use variables in decorators to make them more intuitive for the user!
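
For example, the sed operation shown earlier could carry a decorator that embeds job variables (an illustrative variation, not an additional required step):

  - sed [Replace variables in {{ ctx.job('project') }}/{{ ctx.job('repository') }}]:
      path: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"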

The full .clarive.yml file is available in our GitHub repository:

https://github.com/clarive/example-app-serverless/blob/master/.clarive.yml

Building and deploying your Serverless app

  • First, import this GitHub repository into a new or existing project (i.e. project: serverless, repository: serverless)

Clone the new Clarive repository with your Git client:

git clone http[s]://<your_clarive_instance_URL>/git/serverless/serverless

Create a new topic branch. Here we’ll be tying our branch to a User Story in Clarive:

cd serverless
git checkout -b story/to_github

Now commit and push some changes to the remote repository and go to Clarive monitor. It should have created a new user story topic for you and automatically launched the CI build:

Serverless CI/CD job with Clarive

From here on, you can run the full build and deploy lifecycle, including deployments to other environments (e.g. QA or Production) and other deployment workflows. Just set up different variable values for each environment, so that the CI/CD pipeline will deploy to the corresponding environment when the time comes.

Enjoy!!!


Get an early start and try Clarive now. Get your custom cloud instance for free.



To conclude this blog series, let me share some criteria for evaluating different automation solutions in the application delivery context. These criteria can help you in the selection process for a good delivery automation solution.


  • Logic Layering: How is the automation logic layered out?
  • Coupling: Are the flow components tightly or loosely coupled?
  • Runs Backwards: If rolling back changes is needed, is a reverse flow natural or awkward?
  • Reusable Components: Can components and parts of the logic be easily reused or plug-and-played from one process to the next?
  • Entry Barrier: How hard is it to translate the real world into the underlying technology?
  • Easy to Implement: How hard is it to adapt to new applications and processes? What about maintenance?
  • Environment and Logic Separation: How independent is the logic from the environment?
  • Model Transition: Can it handle the evolution from one model to the other?
  • Massive Parallel Execution: Does the paradigm allow for splitting the automated execution into correlated parts that can run in parallel, with results joined later?
  • Generates Model as a Result: Does the automation know what is being changed and store the resulting configuration back into the database?
  • Handles Model Transitions: Can the system assist in evolving from one environment configuration to another?
  • Testable and Provable: Can the automation be validated, measured and tested using a dry-run environment, and be proven correct?

| Criteria | Process-Driven | Model-Driven | Rule-Driven |
|---|---|---|---|
| Logic Layering | Flowchart | Model, Flowchart | Decision Trees |
| Coupling | Tight | Loose | Decoupled |
| Easy to Debug |  |  |  |
| Runs Backwards (Rollback mode) |  |  |  |
| Understands the underlying environment |  |  |  |
| Understands component dependencies |  |  |  |
| Reusable Components |  |  |  |
| Entry Barrier | Medium | High | Low |
| Easy to Migrate | ✪✪✪ |  | ✪✪✪✪ |
| Easy to Maintain | ✪✪✪ |  | ✪✪✪✪ |
| Environment and Logic separation |  |  |  |
| Requires Environment Blueprints |  |  |  |
| Handles Model Transitions |  |  |  |
| Massive Parallel Execution | (parallel by branching only) | (limited by model components) |  |
| Performance | ✪✪✪ |  | ✪✪✪✪✪ |

Final notes

When automating complex application delivery processes, large organizations need to choose a system that is both powerful and maintainable. Once complexity is introduced, ARA systems often become cumbersome to maintain, slow to evolve and practically impossible to migrate away from.

Process-driven enterprise systems excel at automating business processes (as in BPM tools), but they do not inherently understand the underlying environment. In application delivery and release automation in general, however, understanding the environment is key for component reuse and dependency management. Processes are difficult to adapt and break frequently.

Model-driven systems have a higher implementation ramp-up time since they require blueprinting of the environment before starting. Blueprinting the environment also means duplicating container metadata and the configuration held in other configuration management and software-defined infrastructure tools. The actions executed in model-based systems are not transparent, tend to be fragmented and require outside scripting. Finally, many release automation steps simply cannot be modeled that easily.

Rule-driven systems have a low entry barrier and are simple to maintain and extend. Automation steps are decoupled and consistent, testable and reusable. Rules can run massively in parallel, scaling well to demanding delivery pipelines. The rule-action logic is also the basis of machine-learning and many of the AI practices permeating IT nowadays.

In short, here are the key takeaways when deciding what would be the best approach to automating the delivery of application and service changes:

| PROCESS | MODEL | RULE |
|---|---|---|
| Easy to introduce | Easy to model | Simple to get started |
| Hard to change | Complex to orchestrate | Highly reusable |
| Not environment-aware | High entry barrier | Decoupled, easy to change and replace |
| Error prone | Duplication of blueprints | Massively scalable |
| Complex to navigate and grasp | Leads to fragmented logic and scripting | Models the environment as a result |
|  | Not everything can or needs to be modeled | Fits many use cases |

Rule-driven automation is therefore highly recommended for implementing application and service delivery, environment provisioning and orchestration of tools and processes in continuous delivery pipelines. In fact, a whole new generation of tools in many domains now relies on rule-driven automation, such as:
– Run-book automation
– Auto-remediation
– Incident management
– Data-driven marketing automation
– Cloud orchestration
– Manufacturing automation and IoT orchestration
– And many more…

Release management encompasses a complex set of steps, activities, integrations and conditionals. So which paradigm should drive release management? Processes can become potentially unmanageable and detached from the environment. Models are too tied to the environment and end up requiring scripting to be able to deliver changes in the correct order.

Only rule-driven systems can deliver quick wins that perform to scale and are easy to adapt to fast-changing environments.


Get an early start and try Clarive now. Get your custom cloud instance for free.



It is remarkable how much ITIL bashing I have heard and read about since its 2011 revision was released a few years ago. 


As organizations transform into the digital world and adopt practices such as DevOps, Continuous Delivery, and Value Stream Mapping, many question whether ITIL is still relevant today.

Of course it is!! Let me try to explain this in some detail and share my top 3 reasons why ITIL will remain relevant in 2018 (and likely beyond as well).

The reality of the digital age is the ever-increasing customer expectation that digital and mobile services do what customers need, but also that they will always be there, wherever and whenever they are needed. This impacts Dev as well as Ops.

As a result, companies are searching for and creating innovative new services for consumers, industry and government. At the same time, organizations are continuously working on improving the structure and processes for making sure that incidents, problems, service requests, and service changes are handled in the most efficient and effective way possible, so that user experience and expectations are met continuously and fast. In the digital world, the expectation is to be up 24/7.

Let’s explore this a step deeper.

IT is required, and desires, to deliver value to its internal or external customers (and wants to do this as fast as is acceptable to them). Since ITIL v3, the value of an IT service has been defined as a combination of Utility and Warranty as the service progresses throughout its lifecycle.

Utility on the one hand is defined as the functionality offered by a product, application, or service to meet a particular need. Utility is often summarized as “what it does” or “its level of being fit for purpose”.

Warranty on the other hand provides a promise or guarantee that a product, application or service will meet its agreed requirements (“how it is done”, “its level of being fit for use”). In digital-age wording: ensuring that digital and mobile services will always be there, wherever and whenever they are needed.

I read another interesting article a while ago that stated that Dev only produces 20% of the value that a service creates for its internal or external customers. That 20% is the actual functionality, or what the application does. This is the utility of the service, application, or product as explained above. The other 80% of the value of the service is created by Ops, which ensures the service will be usable according to the customer’s needs and will continue to be usable throughout its entire lifecycle. This is what ITIL calls the warranty of the service.

Warranty includes availability, capacity, continuity and security of the service that must be implemented and maintained long after the deployment is finished and Dev moves on to their next project, or sprint.

So in the end, Ops has accountability for close to 80% of the actual value of the service for internal or external customers. That’s a lot!

Looking at DevOps as a cultural and professional movement that focuses on better communication, collaboration, and trust between Dev and Ops, to ensure a balance between responsiveness to dynamic business requirements and stability, it looks more than natural that it is Dev that must earn the trust of Ops in this setting. If accountability is spread 80%-20%, then it is normal to me that the party taking the highest risk seeks the most trustworthy partner. Ops will seek stability and predictability to deliver the required warranty. To establish trust between Dev and Ops, the handover between the two needs to be “trustworthy”. The way to establish this includes:

  • more transparency and accuracy in release and coding progress
  • more automation within the delivery process (the more manual activities in the delivery process, the lower the level of trust will be)
  • mutual understanding and respect of each other’s needs and expectations to be successful

Therefore, Lean IT and Value Stream Mapping, practices like Continuous Delivery and Continuous Deployment, all become a subset or a building block within a DevOps initiative.  DevOps is often an organic approach toward automating process/workflow and getting products to market more efficiently and with quality.

Often in bigger enterprises, applications or services tend to be highly interconnected. There is a desire to have better decoupling and use of microservices, but for many it will take another decade or even longer to ultimately get there (if at all). Dev teams often work and focus on individual applications or services, but in reality these applications often interact with others within the production environment. Ops has the accountability to ensure services and applications remain available and functional at all times, with quality.

This often means finding a workaround quickly at the front line so customers can continue working, assessing the overall impact of a change in production holistically, identifying failure root causes, etc. This all aligns nicely with what ITIL has been designed for: best practices for managing, supporting, and delivering IT services. There is no way the need for such practices will fade or become irrelevant in the near future, especially not in larger enterprises. On the contrary, with the introduction of new platforms (like public or private cloud, containers, IoT, or virtual machines) we will see an increasing number of silos and teams, because Dev teams often center around specific platforms.

Their deliverables form the microservices and applications of tomorrow, spread over multiple platforms. Ops needs to ensure these services are of quality and deliver value to all customers. This requires discipline, communication, collaboration, tracking, planning and learning across all silos/teams… ITIL still remains the best reference point for establishing such practices.

Big companies, often with legacy code, will only remain successful in the digital age if they find a blend of Agile, DevOps, ITIL and Lean IT that fits them. I mention only these explicitly because they enjoy great momentum at present, but in fact companies should explore the best practices available, find the blend that works effectively and efficiently for them, and ensure buy-in from those affected.

This last aspect is key: teams need to build a common understanding of how DevOps is enabled by Agile, ITIL/ITSM, Lean and maybe other best practices.  It is not just about a tool, automation or continuous delivery but how we go about doing this that is key.  You need to promote, inspire and educate teams on how these practices can be used together to enable them and the company for success. 

To finish let me share my 3 reasons why ITIL remains valid into 2018:

1) ITIL continues to provide a stable foundation/reference point in the evolving enterprise

Flexibility, elasticity and scalability remain key attributes of contemporary IT departments. Creating and maintaining this level of agility relies on having clear processes, a clear and accurate understanding of the current IT configuration and of course a good service design. The core principles of ITIL have been refined to help organizations establish these attributes within their technology systems, ensuring that there is a steady foundation for IT operations. Having this stable environment makes it easier to adjust the service management setup without running into any problems.

2) ITIL provides the required stability and value warranty within evolving enterprises

Businesses face more pressure than ever to maintain constant uptime around the clock, and all the innovations in the world are useless if businesses are losing productivity because of system availability issues. ITIL continues to provide the reliability and stability needed to maximize the value of new technology strategies in today’s digital world. While organizations are on their digital transformation journey, they will have to support multi-speed, multi-risk, multi-platform environments and architectures. ITIL, which regularly evolves and updates itself, continues to provide proven, common-sense best practices to deliver stability in evolving, heterogeneous environments.

3) ITIL remains the de-facto reference set of best practices for IT service management (ITSM) that focuses on aligning IT services with the needs of customers

If you pick and choose, adopt and adapt what you find in ITIL you will learn that a lot of the content is “common sense”. Common sense will never go out of fashion.

Just be aware and accept that the needs of, and value to, a customer go beyond the delivery of (isolated) functionality into a production environment.


Get an early start and try Clarive now. Get your custom cloud instance for free.



This video shows how Clarive Enterprise Edition (EE) can deploy War files to Tomcat, .Net applications to Windows servers and Mainframe Endevor packages with a Single Pipeline.


In the early stages, deployment is done for each technology separately (WAR, .NET, mainframe), but for the production deployment all 3 technologies are deployed to 3 different platforms with a single job.
All deployments are done with a SINGLE pipeline rule.

Watch the video here

This way consistency during deployment is guaranteed.

3 technologies (Java, .NET, COBOL programs in Endevor packages), 3 platforms (Tomcat on Linux, MS Windows Server and mainframe), and 3 environments (DEV, QA, PROD), all deployed with jobs generated by a SINGLE pipeline rule.

The pipeline rule can be extended with other technologies (Siebel, SAP, mobile apps), additional environments (User Acceptance, PreProd) and additional platforms (iOS, Android).

With this, Clarive EE offers an end-to-end view of the release process, with extended dashboarding capabilities and easy tool navigation.

Get the full insight here


Get an early start and try Clarive now. Get your custom cloud instance for free.



A third and final way to automate delivery I will discuss is rule-driven automation.


Rule-driven automation ties together event triggers and actions as the environment evolves from state A to state B when changes are introduced.

Rules understand what changes are being delivered when, where (the environment) and how.

Rules are driven by events and behavior and are fully transparent. Rules are also behind the simplest and most effective tools employed by users of all levels, from the popular IFTTT to MS Outlook, for automating anything from simple tasks to complex processes. Why? Because rules are both easy to implement and easy to understand.

Let’s use again the analogy of software development to make the rule-driven concept clear. It reminds me of my university time when I was working with rule-based systems. At that time, we made the distinction between procedural and logical knowledge. Let me recap and explain both quickly.

Procedural knowledge is knowledge about how to perform some task. Examples are how to provision an environment, how to build an application, how to process an order, how to search the Web, etc. Given their architectural design, computers have always been well-suited to store and execute procedures. As discussed before, most early-day programming languages make it easy to encode and execute procedural knowledge, as they have evolved naturally from their associated computational component (computer). Procedural knowledge appears in a computer as sequences of statements in programming languages.

Logical knowledge on the other hand is the knowledge of “relationships” between entities. It can relate a product and its components, symptoms and a diagnosis, or relationships between various tasks for example. This sounds familiar looking at application delivery and dependencies between components, relationships between applications, release dependencies etc.

Unlike for factual and procedural knowledge, there is no core architectural component within a traditional computer that is well suited to store and use such logical knowledge. Looking in more detail, there are many independent chunks of logical knowledge that are too complex to store easily into a database, and they often lack an implied order of execution. This makes this kind of knowledge ill-suited for straight programming. Logical knowledge seems difficult to encode and maintain using the conventional database and programming tools that have evolved from underlying computer architectures.

This is why rule-driven development, expert system shells, and rule-based systems using rule engines became popular. Such a system was a kind of virtual environment within a computer that would infer new knowledge based on known factual data and IF-THEN rules, decision trees or other forms of logical knowledge that could be defined.

It is clear that building, provisioning, and deploying applications, or deploying a release with release dependencies, involves a tremendous amount of logical knowledge. This is exactly the reason why deployment is often seen as complex. We want to define a procedural script for something that has too many logical, non-procedural knowledge elements.

For this reason, I believe that rule-driven automation for release automation and deployment has a lot of potential.
In a rule-driven automation system, matching rules react to the state of the system. The model is a result of how the system reconfigures itself.

Rule-driven automation is based on decision trees that are very easy to grasp and model, because they:

  • Are simple to understand and interpret. People are able to understand event triggers and rules after a brief explanation. Rule decision trees can also be displayed graphically in a way that is easy for non-experts to interpret.
  • Require little data preparation. A model-based approach requires normalization into a model; behaviors, however, can easily be turned into a rule decision tree without much effort: IF a THEN b.
  • Support full decoupling. With the adoption of service-oriented architectures, automation must be decoupled so that it is easy to replace, adapt and scale.
  • Are auto-scalable, replaceable and reliable. Decoupled logic can scale and is safer to replace, continuously improve and deploy.
  • Are robust. They resist failure even if their assumptions are somewhat violated by variations in the environment.
  • Perform well in large or complex environments. A great number of decisions can be executed using standard computing resources in reasonable time.
  • Mirror human decision making more closely than other approaches. This is useful when modeling human decisions/behavior and makes them suitable for applying machine learning algorithms.

The main features of rule-driven automation include:

  • Rules model the world using basic control logic: IF this THEN that. For every rule there is an associated action. Actions can be looped and further broken down into conditions (see the sketch below this list).
  • Rules are loosely coupled and can therefore execute in parallel and en masse without the need to create orchestration logic.
  • Rules are templates and can be reused extensively.
  • Rules can be chained and their concurrency controlled.
  • Rules handle complex delivery use cases including decision and transformation.
  • The model is a result of how rules interact with the environment. Models and blueprints can also be used as input, but are not a requirement.
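
As a minimal sketch of the idea (generic YAML pseudocode, not the actual Clarive rulebook syntax; the event, condition and action names are purely illustrative), a rule binds an event trigger and optional conditions to a list of actions:

  # Illustrative pseudocode only: event + condition -> actions
  rule: deploy-on-merge
  when:
    event: change_merged          # trigger: a change set was merged
    branch: release/*             # condition: only for release branches
  then:
    - build: { target: app }      # action: build the affected components
    - deploy: { env: QA }         # action: deliver the result to QA
    - notify: { channel: ops }    # action: tell Ops what changed and where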

Get an early start and try Clarive now. Get your custom cloud instance for free.



It is incredible to see how much (and increased) attention DevOps is getting in organizations today.


As a DevOps tool vendor, we welcome this of course, but at the same time it also confirms that successfully implementing DevOps within organizations is not as simple as it is sometimes made out to be. The obvious question is then of course: why?

In essence DevOps is about improved collaboration between Dev and Ops and automation of all delivery processes (made as lean as possible) for quality delivery at the speed of business. If you want to learn more about DevOps and how to implement it in bigger enterprises, take a look at the 7 Step Strategy ebook on our website.

End-to-end delivery processes can be grouped into 3 simple words: Code, Track, and Deploy.

  • “Code” represents those delivery tasks and processes closely aligned with the Dev side.
  • “Deploy” represents the delivery tasks and processes closely aligned with the Ops side of the delivery chain.
  • “Track” is what enables improvement and better collaboration: tracking progress within the entire delivery chain. To make delivery processes lean, accurate, real-time and factual process data is required to analyze, learn and improve.

Thinking about the delivery toolchain

Many colleagues have written about the cultural aspects of DevOps and the related challenges of implementing DevOps within the organization. I concur with their statements and am generally in agreement with their approach to a successful DevOps journey.

To change a culture and/or team behaviour though, especially when team members are spread globally, the organization needs to think carefully about its delivery toolchain. Why? Because a prerequisite for a culture to change, or, even more basic, simply for people to collaborate, is that people are able to share assets and information in real time, regardless of location.

The reality in many organizations today is that people are grouped into teams and that each team has the freedom to choose its own tooling, based on platform support, past experience, or just preference. As a result, organizations very quickly find themselves in a situation where the delivery toolchain becomes a big set of disconnected tools for performing various code and deploy tasks. The lack of integration results in many manual tasks to “glue” it all together. Inevitably they all struggle with end-to-end tracking, and a lot of time and energy is wasted on this. I often see this even within teams, because of the plenitude of tools they have installed as their delivery toolchain.

Clarive Lean Application Delivery

SIMPLICITY is the keyword here.

Funny enough this is a DevOps goal! Recall that DevOps is said to apply Lean and Agile principles to the delivery process. So why do teams allow product-based silos? Why do they look for tools to fix one particular delivery aspect only, like build, deploy, version control, test, etc.?

Instead of looking for (often open source) code or tools to automate one particular delivery aspect, a team or organization should look at the end-to-end delivery process and look for the simplest way to automate it, importantly without manual activities!

We believe that with workflow driven deployment teams can get code, track, and deploy automated the right way: simplified and integrated!

Workflow driven deployment will allow teams to:

  • Use discussion topics that make it simpler to manage and relate project activities: code branches are mapped 1:1 with their corresponding topic (Feature, User Story, Bugfix, etc.) making them true topic branches. This will provide strong coupling between workflow and CODE.
  • Track progress and automate deployment on every environment through kanban boards. Kanban Boards allow you to quickly visualize status of various types of topics in any arrangement. Within Clarive, Kanban topics can be easily grouped into lists, so that you can split your project in many ways. Drop kanban cards on a board simply into an environment to trigger a deployment. Simple and fully automated! This will provide strong coupling between workflow and DEPLOY automation onto every environment.

The Clarive Kanban makes tracking progress easier

  • Analyze and monitor status, progress and timing within the delivery process. It even makes it possible to perform pipeline profiling. Profiling allows you to spot bottlenecks and will help you to optimize pipelines and overall workflow using execution profiling data. All data is factual and real-time! This will provide you with ultimate TRACK information within the delivery process.

The Right way to go

Why is workflow driven deployment the right way to go? Because it breathes the true objectives of DevOps: better and seamless collaboration between Dev and Ops, with automation everywhere possible at the toolchain level, not only at the process/human level. This makes a big difference. I believe that a lot of companies continue to struggle with DevOps simply because they are shifting their collaboration and automation issues from their current processes and tools to a disconnected DevOps toolchain that exposes similar and new problems. As a result, they become skeptical about the DevOps initiative… and blame the toolchain for it!! (always easier than blaming people)

A DevOps platform that enables true workflow driven deployment blends process automation with execution automation and has the ability to analyze and track automation performance from start to finish. This is what you should look for to enable DevOps faster: a simplified, integrated toolchain that gets the job done with transparency, so organizations can concentrate on the cultural and people-related aspects.


Try Clarive now. Get your custom cloud instance for free.



Here is the second post in our series on DevOps automation models. Model-driven delivery automation is based on predefined blueprints of the environments.


Blueprints model what needs to exist and where. Logic is then attached to the distinct components in the blueprint by defining how each component can be created, updated or decommissioned.

Model-driven deployment is the preferred method used by many recent Application Release Automation (ARA) tools and also some configuration management tools. The approach became popular in the early 2000s when software-defined data centers and virtual machines became the new norm.

Models came as a welcome improvement over process systems. Using Model-driven systems, you need to understand and represent the environmental model first, then define how to deliver changes to it. By understanding the environment, model logic becomes segmented, and therefore reusable.

A higher abstraction

Let’s make another analogy to software development to make the model-driven concept clear. Around the early 2000s, Model-Driven Development (MDD) also became popular as an alternative to traditional development. There are two core concepts associated with model-driven development: abstraction and automation.

In MDD, the software application model is defined on a higher abstraction level and then converted into a working application using automated transformation or interpretations. The right model driven development approach leverages model execution at run time, where the model is automatically transformed into a working software application by interpreting and executing the model (removing the need to generate or write code). This means that executable code is “generated” automatically based on the model and transformation of this model based on the specific lower-level environmental settings. The higher the level of abstraction, the more likely reuse becomes possible. For this reason, a model driven development platform is often referred to as a high-productivity platform given the unprecedented speed at which developers can build and deploy new applications. This speed is derived from the use of models and other pre-built components that business and technical teams use to visually construct applications.

The approach described above can easily be mapped to application delivery as follows: Those responsible for environments, application components, and deployment processes are all able to work together, but they define and manage their own delivery related aspects separately. As such there is a clear separation of duties and abstraction is made for each area of focus:

  • Application model (e.g. application server, web server, database server, WAR, SQL, GEM, service bus, etc.)
  • Environmental model (e.g. cloud (public, private), container platform, ERP packages, VM, OS, storage, network, security, etc.)
  • Process model (e.g. installation order, variable settings, dependencies, etc.)

The first step in a model-driven approach is therefore to define a blueprint of the environment.

Model-driven delivery automation is often split into three or more layers. Each layer is orchestrated/modeled separately, for example:
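
As a hypothetical sketch (illustrative YAML, not the format of any particular ARA tool; all names and keys are made up for the example), the layers could be described like this:

  # Illustrative blueprint only; component names and keys are hypothetical
  application:                    # application model: what is delivered
    components:
      - { name: web-app, artifact: web-app.war, depends_on: [orders-db] }
      - { name: orders-db, artifact: schema.sql }
  environment:                    # environmental model: where it runs
    name: qa
    hosts:
      - { role: app-server, platform: tomcat }
      - { role: db-server, platform: postgresql }
  process:                        # process model: how and in what order
    order: [orders-db, web-app]
    on_failure: rollback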

The main features of model-driven automation include:

  • Models the environment components and their relationships.

  • Higher level of abstraction.

  • Not tightly coupled.

  • Components are reusable.

  • Component orchestration process is detached.

Model driven automation requires complex orchestration to get even the simplest of things done

Pitfalls

Today, model-driven automation is an important and integral part of most configuration management tools and enterprise infrastructure automation software. Moreover, container technology such as Docker has models built into its metadata.

But this creates a challenge: in the realm of continuous delivery, modeling has become synonymous with duplication. In a world where containerized microservices and infrastructure-as-code have attained widespread adoption, the model is already embedded in the application or service being delivered. What is the point of having to implement another copy of it in your ARA tool?

On top of that, in hybrid platform environments, models are also hard to get started with. They require describing complex enterprise architectures and application relationships in a graphical view. It would be great to see the model of an environment that spans ERP packages such as SAP or Siebel or Salesforce, in combination with some mobile and mainframe LPARS…

Finally, previous work, such as scripts or processes, is harder to adapt to models, since, as we saw, scripts combine actions with correlated steps. So model-driven systems are tougher to migrate to if you want to retain investments already made.

In the next blog post, we will take a closer look at rule-driven automation, a final alternative way to handle deployments which provides a solution to some of the challenges raised here.


See also:

Try Clarive now. Get your custom cloud instance for free.



This post is the first in a series of 4 covering different types of automation methods.


Organizations that want to deliver application, service and environment changes in a quick, consistent, and safe manner and with a high level of quality, invariably need a good and flexible automation system.

Choosing the correct automation method from the start can make the transition to continuous or automated delivery a lot easier.

Current tools for delivering applications in the DevOps and ARA space usually fit into one of these three automation paradigms:

  1. Process or script driven
  2. Model driven
  3. Rule driven

In the past, and in many realms of automation systems, process/script-driven approaches have prevailed. More recently, especially since the inception of the cloud and software-defined infrastructure, model-driven automation has become increasingly popular.

However, as I will explain in this blog series, both process and model-driven automation have serious drawbacks. These drawbacks tend to result in a considerable amount of rigid processes that are expensive in terms of maintenance and evolution, or simply very hard to introduce in the first place. In addition, delivery quality can be seriously impacted.

This blog post is the first of 4 that will elaborate on each paradigm individually, discussing their approach, features, and drawbacks.

Process or script-driven automation

The first paradigm I want to discuss is script or process-driven automation. It is a method based on a clearly defined, start-to-end process flow that defines how change is delivered to destination environments, typically a series of procedural steps executed to deploy the entire application.

But consider this: such an approach can become painful given the trend that requires today’s developers to build scalable applications and apply a strategy based on hybrid clouds and platforms to attain flexibility and continuity at the lowest possible cost. With multiple platforms in mind, a process/script-based solution means you need a unique process for each platform, cloud, application, and/or environment.

Since scripts are unique to each combination of process, environment, and app, they are called tightly coupled. As a result, deployment processes may need to be rewritten several times throughout the lifecycle of the application — any time the application, middleware, or components change.

Another weakness of the process-driven approach is that it does not offer the opportunity to align teams around the same toolset. In fact, it encourages individual teams to maintain their own catalogue of custom-built scripts, in their technologies of choice, and prevents the rest of the organization from benefiting from their investments. This is the worst possible way to implement DevOps practices and instate lean principles within application delivery as it nurtures silos and obstructs collaboration and sharing – while often making the delivery toolchain unnecessarily complex.

Coding: an analogy

Let’s make an analogy within software development itself to make this clearer.

Process or script driven deployment is in a way analogous to coding using a particular programming language or IDE. Since each development team writes code in a different language, it does not offer the opportunity to align teams around the same code set or make it easy to share code across teams.

Although in recent times programming languages have become much more platform neutral, recall that in the old days languages were very much bound to platforms and machines, and included specific instruction sets closely supporting the platform or machine they targeted. This also resulted in issues with sharing or migrating code across platforms. This is very similar to the issues I see with scripts/processes for deployment.

So, in summary, a process is defined as a series of steps that contain information about the environment, the changes being introduced and the deployment logic to implement the change process for the affected platform(s).
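
As a hypothetical sketch (illustrative YAML steps, not the syntax of any particular tool; hosts, paths and step names are made up), a process-driven deployment hard-codes the environment and the ordering into the steps themselves:

  # Illustrative only: a procedural, tightly coupled deployment sequence
  steps:
    - stop_service:  { host: prod-app-01, service: tomcat }
    - copy_artifact: { host: prod-app-01, src: build/app.war, dest: /opt/tomcat/webapps }
    - run_sql:       { host: prod-db-01, file: migrations/v42.sql }
    - start_service: { host: prod-app-01, service: tomcat }
    - smoke_test:    { url: "http://prod-app-01:8080/health" }
  # Deploying the same application to QA or to another platform typically means
  # cloning and editing this sequence, which is exactly why such processes are tightly coupled.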

Process-driven automation

The main features of process-driven automation include:

  • Directly represents how a sequence of execution steps need to be performed.
  • Represents both human actions and machine integration processes.
  • A process represents how the environment changes, but does not understand the environment layout.
  • They are tightly coupled.
  • Logical steps contain hard-coded information about settings, context, and environment, which makes them difficult to abstract.

Today many processes are implemented using software-based flowchart diagrams.

Flowcharts are very well known from computer science textbooks. They were initially meant to represent control/sequencing logic at a conceptual stage. Their original use was not intended for execution or automation, certainly not in the context of day-to-day automation of the complex delivery logic found in most IT departments of decent-sized organizations.

Sequencing delivery logic especially drives delivery complexity. Why? Because the sequence “depends” on many different events that occur during the process. Representing all options up front can be challenging. As a result, the process charts often become unwieldy to read and oversee when the complexity of the delivery process is high, which is often the case when there are a lot of application interdependencies.

Simple but risky

To conclude, process or script driven automation is often perceived to be simple, especially if the automation tool is very closely aligned with the platforms it supports (as with a programming language in the analogy I used) and if delivery complexity is low. The approach is well appreciated by developers because it gives them a solution and powers very similar to what they experience within their development environments, autonomy and control being the most important ones.

The biggest challenge I repeatedly hear about is its tight coupling, resulting in many different “clones” of the same script/process being used for deployment onto different environments or for different applications. Unless there is a strong governance process supporting the change and use of such processes, the risk can be very high that what was put into production is not entirely the same as what was tested in UAT or QA… I am not sure business users and product owners will appreciate that thought.

In my next blog post, I will take a closer look at model-driven automation, a more recent alternative for handling deployments with more opportunity for reuse and far less coupling.


Read next:

Try Clarive now. Get your custom cloud instance for free.