To conclude this blog series, let me share some criteria to evaluate different automation solutions in the application delivery context. These criteria can help you in the selection process for a good delivery automation solution.


Logic Layering
How is the automation logic layered?
Coupling
Are the flow components tightly or loosely coupled?

Runs Backwards
If rolling back changes is needed, is a reverse flow natural or awkward?
Reusable Components
Can components and parts of the logic be easily reused or plug-and-played from one process to the next?

Entry Barrier
How hard is it to translate the real world into the underlying technology?
Easy to Implement
How hard is it to adapt to new applications and processes? What about maintenance?

Environment and Logic Separation
How independent is the logic from the environment?
Model Transition
Can it handle the evolution from one model to another?

Massive Parallel Execution
Does the paradigm allow for splitting the automated execution into correlated parts that can run in parallel, with results joined later?
Generates Model as a Result
Does the automation know what is being changed and store the resulting configuration back into the database?

Handles Model Transitions
Can the system assist in evolving from one environment configuration to another?
Testable and Provable
Can the automation be validated, measured and tested using a dry-run environment and be proven correct?

Criteria | Process-Driven | Model-Driven | Rule-Driven
Logic Layering | Flowchart | Model, Flowchart | Decision Trees
Coupling | Tight | Loose | Decoupled
Easy to Debug | | |
Runs Backwards (Rollback mode) | | |
Understands the underlying environment | | |
Understands component dependencies | | |
Reusable Components | | |
Entry Barrier | Medium | High | Low
Easy to Migrate | | ✪✪✪ | ✪✪✪✪
Easy to Maintain | | ✪✪✪ | ✪✪✪✪
Environment and Logic Separation | | |
Requires Environment Blueprints | | |
Handles Model Transitions | | |
Massive Parallel Execution | (parallel by branching only) | (limited by model components) |
Performance | | ✪✪✪ | ✪✪✪✪✪

Final notes

When automating complex application delivery processes, large organizations need to choose a system that is both powerful and maintainable. Once complexity is introduced, ARA (Application Release Automation) systems often become cumbersome to maintain, slow to evolve and practically impossible to migrate away from.

Process-driven enterprise systems excel at automating business processes (as in BPM tools), precisely because they do not inherently understand the underlying environment. But in application delivery, and release automation in general, understanding the environment is key for component reuse and dependency management. Processes are difficult to adapt and break frequently.

Model-driven systems have a higher implementation ramp-up time since they require blueprinting the environment before starting. Blueprinting the environment also means duplicating container metadata and configuration already held in configuration management and software-defined infrastructure tools. The actions executed in model-based systems are not transparent, tend to be fragmented and require outside scripting. Finally, many release automation steps simply cannot be modeled that easily.

Rule-driven systems have a low entry barrier and are simple to maintain and extend. Automation steps are decoupled, consistent, testable and reusable. Rules can run massively in parallel, scaling well to demanding delivery pipelines. Rule-action logic is also the basis of machine learning and many of the AI practices permeating IT today.

In short, here are the key takeaways when deciding what would be the best approach to automating the delivery of application and service changes:

PROCESS
✓ Easy to introduce
✓ Hard to change
✓ Not environment-aware
✓ Error prone
✓ Complex to navigate and grasp

MODEL
✓ Easy to model
✓ Complex to orchestrate
✓ High entry barrier
✓ Duplication of blueprints
✓ Leads to fragmented logic and scripting
✓ Not everything can or needs to be modeled

RULE
✓ Simple to get started
✓ Highly reusable
✓ Decoupled, easy to change and replace
✓ Massively scalable
✓ Models the environment as a result
✓ Fits many use cases

Rule-driven automation is therefore highly recommended for implementing application and service delivery, environment provisioning and orchestration of tools and processes in continuous delivery pipelines. In fact, a whole new generation of tools in many domains now relies on rule-driven automation, such as:
– Run-book automation
– Auto-remediation
– Incident management
– Data-driven marketing automation
– Cloud orchestration
– Manufacturing automation and IoT orchestration
– And many more…

Release management encompasses a complex set of steps, activities, integrations and conditionals. So which paradigm should drive release management? Processes can become unmanageable and detached from the environment. Models are too tied to the environment and end up requiring scripting to deliver changes in the correct order.

Only rule-driven systems can deliver quick wins that perform to scale and are easy to adapt to fast-changing environments.


Get an early start and try Clarive now. Install your 30-day trial here.



We’re pleased to present our new release Clarive 7.0.11 with two new administration features. 


Rulebook Variables

Now you can keep your rulebook variables private with secret variables. The content is not visible to the user and the data is encrypted in the database.

Variables now also have multiple scopes. Define the scope on a variable and it will only be available to rulebooks that run under the given scope.

clarive variable definition


Users can only manage variables for owned projects

Docker Admin Interface

Clarive rulebooks can download, install and run shell commands within Docker images with the image: op. Starting with this release, you can manage all Docker images and containers installed and available to rulebooks on the Clarive server.

The Clarive Docker admin panel can manage both images and containers. A complete list is shown, including the current status of each container.

From the container list you can perform actions on them such as start, stop and delete. You can also delete Docker images to keep your image registry tidy and remove old images.

Improvements and issues resolved

Other small fixes and enhancements added to the release:

  • ENH – Small Kanban improvements
  • ENH – Update js libraries
  • ENH – Variable management for rulebooks
  • FIX – Topic grid filter bugs
  • FIX – Rulebook sed op fixes
  • FIX – Push to git repositories allowed with no project related to repository (Enterprise Edition Only)

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to get it from our install page.

Cloud images have been updated automatically in case you are a user of our cloud.

Acknowledgements

Join us in our Community for suggestions and bug reports.

Thanks to everyone who participated in making this release possible.

Roadmap

The next release, 7.0.12, will come out in the first week of January 2018 with mostly bug fixes and small enhancements.

And a major release, 7.1, is coming later in January, packed with some awesome additions:

  • A brand new navigation interface, with tabless project-oriented navigation
  • A revamped Git repository interface
  • A new admin interface
  • Revamped documentation
  • And much more!

Visit our documentation to learn more about the features of Clarive.



It is remarkable how much ITIL bashing I have heard and read about since its 2011 revision was released a few years ago. 


As organizations transform into the digital world and adopt practices such as DevOps, Continuous Delivery, and Value Stream Mapping, many question whether ITIL is still relevant today.

Of course it is! Let me try to explain this in some detail and share my top 3 reasons why ITIL will remain relevant in 2018 (and likely beyond).

Reality in the digital age is the ever-increasing customer expectation that digital and mobile services do what they need, but also that they will always be there, wherever and whenever they are needed. This impacts Dev as well as Ops.

As a result, companies are searching for and creating innovative new services for consumers, industry and government. At the same time, organizations are also continuously working on improving the structure and processes for making sure that incidents, problems, service requests, and service changes are handled in the most efficient and effective way possible, so that user experience and expectations are met continuously and fast. In the digital world the expectation is to be up 24/7.

Let’s explore this a step deeper.

IT is required, and desires, to deliver value to its internal or external customers (and wants to do this as fast as is acceptable to them). Since ITIL v3, the value of an IT service has been defined as a combination of Utility and Warranty as the service progresses throughout its lifecycle.

Utility on the one hand is defined as the functionality offered by a product, application, or service to meet a particular need. Utility is often summarized as “what it does” or “its level of being fit for purpose”.

Warranty on the other hand provides a promise or guarantee that a product, application or service will meet its agreed requirements (“how it is done”, “its level of being fit for use”). In digital-age wording: ensuring that digital and mobile services will always be there, wherever and whenever they are needed.

I read another interesting article a while ago that stated that Dev only produces 20% of the value that a service creates for its internal or external customers. That 20% is the actual functionality, or what the application does. This is the utility of the service, application, or product as explained above. The other 80% of the value of the service is created by Ops, which ensures the service is usable according to the customer’s needs and continues to be usable throughout its entire lifecycle. This is what ITIL calls the warranty of the service.

Warranty includes availability, capacity, continuity and security of the service that must be implemented and maintained long after the deployment is finished and Dev moves on to their next project, or sprint.

So in the end, Ops has accountability for close to 80% of the actual value of the service for internal or external customers. That’s a lot!

Looking at DevOps as a cultural and professional movement focusing on better communication, collaboration, and trust between Dev and Ops to ensure a balance between responsiveness to dynamic business requirements and stability, it seems only natural that it is Dev that must earn the trust of Ops in this setting. If accountability is split 80%-20%, then it is normal to me that the party taking the highest risk seeks the most trustworthy partner. Ops will seek stability and predictability to deliver the required warranty. To establish trust between Dev and Ops, the handover between the two needs to be “trustworthy”. The way to establish this includes:

  • more transparency and accuracy in release and coding progress
  • more automation within the delivery process (the more manual activities in the delivery process, the lower the level of trust will be)
  • mutual understanding and respect of each other’s needs and expectations to be successful

Therefore, Lean IT and Value Stream Mapping, and practices like Continuous Delivery and Continuous Deployment, all become a subset or a building block within a DevOps initiative. DevOps is often an organic approach toward automating processes and workflow and getting products to market more efficiently and with quality.

Often in bigger enterprises, applications or services tend to be highly interconnected. There is a desire for better decoupling and use of microservices, but for many it will take another decade or even longer to ultimately get there (if at all). Dev teams often work on and focus on individual applications or services, but in reality these applications often interact with others within the production environment. Ops has the accountability to ensure services and applications remain available and functional at all times, with quality.

This often means finding a workaround quickly at the front line so customers can continue working, assessing the overall impact of a change in production holistically, identifying failure root causes, and so on. This all aligns nicely with what ITIL has been designed for: best practices for managing, supporting, and delivering IT services. There is no way the need for such practices will fade or become irrelevant in the near future, especially not in larger enterprises. On the contrary, with the introduction of new platforms (like public or private cloud, containers, IoT, or virtual machines) we will see an increasing number of silos and teams, because Dev teams often center around specific platforms.

Their deliverables form the microservices and applications of tomorrow, spread over multiple platforms. Ops needs to ensure these services are of quality and deliver value to all customers. This requires discipline, communication, collaboration, tracking, planning and learning across all silos and teams. ITIL still remains the best reference point for establishing such practices.

Big companies, often with a legacy codebase, will only remain successful in the digital age if they find a blend of Agile, DevOps, ITIL and Lean IT that fits them. I mention only these explicitly because they enjoy great momentum at present, but in fact companies should explore the best practices available, find the blend that works effectively and efficiently for them, and ensure buy-in from those affected.

This last aspect is key: teams need to build a common understanding of how DevOps is enabled by Agile, ITIL/ITSM, Lean and maybe other best practices.  It is not just about a tool, automation or continuous delivery but how we go about doing this that is key.  You need to promote, inspire and educate teams on how these practices can be used together to enable them and the company for success. 

To finish let me share my 3 reasons why ITIL remains valid into 2018:

1) ITIL continues to provide a stable foundation and reference point in the evolving enterprise

Flexibility, elasticity and scalability remain key attributes of contemporary IT departments. Creating and maintaining this level of agility relies on having clear processes, a clear and accurate understanding of the current IT configuration and of course a good service design. The core principles of ITIL have been refined to help organizations establish these attributes within their technology systems, ensuring that there is a steady foundation for IT operations. Having this stable environment makes it easier to adjust the service management setup without running into any problems.

2) ITIL provides the required stability and value warranty within evolving enterprises

Businesses face more pressure than ever to maintain constant uptime around the clock, and all the innovations in the world are useless if businesses are losing productivity because of system availability issues. ITIL continues to provide the reliability and stability needed to maximize the value of new technology strategies in today’s digital world. While organizations are on their digital transformation journey, they will have to support multi-speed, multi-risk, multi-platform environments and architectures. ITIL, which is itself regularly evolving and being updated, continues to provide proven, common-sense best practices to deliver stability in evolving, heterogeneous environments.

3) ITIL remains the de-facto reference set of best practices for IT service management (ITSM) that focuses on aligning IT services with the needs of customers

If you pick and choose, adopt and adapt what you find in ITIL you will learn that a lot of the content is “common sense”. Common sense will never go out of fashion.

Just be aware and accept that the needs of, and value to, a customer go beyond just the delivery of (isolated) functionality into a production environment.


Get an early start and try Clarive now. Install your 30-day trial here.



This video shows how Clarive can deploy War files to Tomcat, .Net applications to Windows servers and Mainframe Endevor packages with a Single Pipeline.


In the early stages, deployment is done for each technology separately (WAR, .NET, Mainframe), but for the production deployment all 3 technologies are deployed to 3 different platforms with a single job.
All deployments are done with a SINGLE pipeline rule.

Watch the video here:

This way consistency during deployment is guaranteed.

3 technologies (Java, .NET, COBOL programs in Endevor packages), 3 platforms (Tomcat on Linux, MS Windows Server and mainframe) and 3 environments (DEV, QA, PROD) deployed with jobs generated by a SINGLE pipeline rule.

The pipeline rule can be extended with other technologies (Siebel, SAP, mobile apps), additional environments (User Acceptance, PreProd) and additional platforms (iOS, Android).
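To illustrate the idea, here is a rough sketch in plain Python (not Clarive's actual pipeline rule syntax; the handler names and technology keys are invented) of how a single pipeline can dispatch each technology to its own deployment step for whatever environment a job targets:

```python
# Hypothetical sketch: one pipeline, many technologies and environments.
# None of these names come from Clarive; they only illustrate the dispatch idea.

def deploy_war(env):      print(f"Deploying WAR to Tomcat on Linux ({env})")
def deploy_dotnet(env):   print(f"Deploying .NET app to Windows Server ({env})")
def deploy_endevor(env):  print(f"Promoting Endevor package on the mainframe ({env})")

HANDLERS = {
    "java":      deploy_war,
    ".net":      deploy_dotnet,
    "mainframe": deploy_endevor,
    # extend here: "siebel": ..., "sap": ..., "ios": ..., "android": ...
}

def pipeline(technologies, environment):
    """A single pipeline rule: one job covers every technology it is given."""
    for tech in technologies:
        HANDLERS[tech](environment)

pipeline(["java"], "DEV")                        # early stage: one technology at a time
pipeline(["java", ".net", "mainframe"], "PROD")  # production: all three in a single job
```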

With this, Clarive offers an end-to-end view of the release process, with extended dashboarding capabilities and easy tool navigation.

Get the full insight here


Get an early start and try Clarive now. Install your 30-day trial here.



The third and final way to automate delivery that I will discuss is rule-driven automation.


Rule-driven automation ties together event triggers and actions as the environment evolves from state A to state B when changes are introduced.

Rules understand what changes are being delivered when, where (the environment) and how.

Rules are easy

Rules are driven by events and behavior and are fully transparent. Rules are also behind the simplest and most effective tools employed by users of all levels, from the popular IFTTT to MS Outlook, for automating anything from simple tasks to complex processes. Why? Because rules are both easy to implement and easy to understand.
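To make the trigger/action idea concrete, here is a minimal sketch in plain Python (illustrative only, not Clarive or IFTTT syntax; the event fields, condition and action are invented):

```python
# A minimal IF-this-THEN-that rule: WHEN an event matches a condition, run an action.

def on_release_branch(event):
    # Condition: only react to pushes on release branches (invented convention)
    return event["branch"].startswith("release/")

def trigger_build(event):
    # Action: kick off a build for the pushed commit
    print(f"Building {event['repo']} at commit {event['commit']}")

RULES = [
    (on_release_branch, trigger_build),  # IF this THEN that
]

def dispatch(event):
    for condition, action in RULES:
        if condition(event):
            action(event)

dispatch({"repo": "shop", "branch": "release/2.3", "commit": "a1b2c3"})
```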

Let’s use again the analogy of software development to make the rule-driven concept clear. It reminds me of my university time when I was working with rule-based systems. At that time, we made the distinction between procedural and logical knowledge. Let me recap and explain both quickly.

Knowledge is different

Procedural knowledge is knowledge about how to perform some task. Examples are how to provision an environment, how to build an application, how to process an order, how to search the Web, etc. Given their architectural design, computers have always been well-suited to store and execute procedures. As discussed before, most early-day programming languages make it easy to encode and execute procedural knowledge, as they have evolved naturally from their associated computational component (computer). Procedural knowledge appears in a computer as sequences of statements in programming languages.

Logical knowledge on the other hand is the knowledge of “relationships” between entities. It can relate a product and its components, symptoms and a diagnosis, or relationships between various tasks for example. This sounds familiar looking at application delivery and dependencies between components, relationships between applications, release dependencies etc.

Unlike factual and procedural knowledge, there is no core architectural component within a traditional computer that is well suited to storing and using such logical knowledge. Looking in more detail, there are many independent chunks of logical knowledge that are too complex to store easily in a database, and they often lack an implied order of execution. This makes this kind of knowledge ill-suited for straight programming. Logical knowledge seems difficult to encode and maintain using the conventional database and programming tools that have evolved from underlying computer architectures.
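A small, generic sketch (plain Python, with an invented dependency map) makes the contrast tangible: procedural knowledge reads as an ordered recipe, while logical knowledge is a set of relationships that can be queried in any order:

```python
def build():     print("building")
def run_tests(): print("running tests")
def publish():   print("publishing")

# Procedural knowledge: an explicit, ordered recipe of steps.
def deploy_app():
    build()       # step 1
    run_tests()   # step 2
    publish()     # step 3

# Logical knowledge: unordered relationships between entities
# (here, which component depends on which; the data is invented).
DEPENDS_ON = {
    "webapp":   ["api", "assets"],
    "api":      ["database"],
    "assets":   [],
    "database": [],
}

def all_dependencies(component, seen=None):
    """Derive the full (transitive) dependency set from the relations alone."""
    seen = set() if seen is None else seen
    for dep in DEPENDS_ON.get(component, []):
        if dep not in seen:
            seen.add(dep)
            all_dependencies(dep, seen)
    return seen

deploy_app()
print(all_dependencies("webapp"))  # {'api', 'assets', 'database'}
```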

Rules as virtual environments

This is why rule-driven development, expert system shells and rule-based systems using rule engines became popular. Such a system was a kind of virtual environment within a computer that would infer new knowledge based on known factual data and IF-THEN rules, decision trees or other forms of logical knowledge that could be defined.
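As a rough illustration of that idea, here is a tiny forward-chaining sketch in plain Python; the facts and rules are made up, and this is not any particular rule engine or Clarive feature:

```python
# Start from known facts and keep applying IF-THEN rules until nothing new can be inferred.

facts = {("artifact_built", "webapp"), ("tests_passed", "webapp")}

# Each rule: (facts that must hold, fact to infer)
rules = [
    ([("artifact_built", "webapp"), ("tests_passed", "webapp")],
     ("ready_for_qa", "webapp")),
    ([("ready_for_qa", "webapp"), ("qa_approved", "webapp")],
     ("ready_for_prod", "webapp")),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if all(c in facts for c in conditions) and conclusion not in facts:
            facts.add(conclusion)   # new knowledge inferred from existing facts
            changed = True

# ("ready_for_qa", "webapp") has been inferred; "ready_for_prod" still needs "qa_approved".
print(facts)
```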

It is clear that building, provisioning, and deploying applications, or deploying a release with release dependencies, involves a tremendous amount of logical knowledge. This is exactly why deployment is often seen as complex: we want to define a procedural script for something that has too many logical, non-procedural knowledge elements.

For this reason, I believe that rule-driven automation for release automation and deployment has such great potential.

In a rule-driven automation system, matching rules react to the state of the system. The model is a result of how the system reconfigures itself.

Rule-driven automation is based on decision trees that are very easy to grasp and model, because they:

  • Are simple to understand and interpret. People are able to understand event triggers and rules after a brief explanation. Rule decision trees can also be displayed graphically in a way that is easy for non-experts to interpret.
  • Require little data preparation. A model-based approach requires normalization into a model. Behaviors, however, can easily be turned into a rule decision tree without much effort: IF a THEN b.
  • Support full decoupling. With the adoption of service-oriented architectures, automation must be decoupled so that it is easy to replace, adapt and scale.
  • Are auto-scalable, replaceable and reliable. Decoupled logic can scale and is safer to replace, continuously improve and deploy.
  • Are robust. They resist failure even if their assumptions are somewhat violated by variations in the environment.
  • Perform well in large or complex environments. A great number of decisions can be executed using standard computing resources in reasonable time.
  • Mirror human decision making more closely than other approaches. This is useful when modeling human decisions and behavior, and makes rules suitable for applying machine learning algorithms.

The main features of rule-driven automation include:

  • Rules model the world using basic control logic: IF this THEN that. For every rule there is an associated action. Actions can be looped and further broken down into conditions.
  • Rules are loosely coupled and therefore can execute in parallel and en masse without the need to create orchestration logic (see the sketch after this list).
  • Rules are templates and can be reused extensively.
  • Rules can be chained and their concurrency controlled.
  • Rules handle complex delivery use cases, including decision and transformation.
  • The model is a result of how rules interact with the environment. Models and blueprints can also be used as input, but are not a requirement.
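As a rough sketch of that parallel, decoupled execution (plain Python with invented component names, not Clarive's rule syntax), independent rule actions can run side by side and the resulting state is collected afterwards as the model:

```python
from concurrent.futures import ThreadPoolExecutor

def deploy(component, environment):
    # Each rule action is independent, so no orchestration logic is needed.
    return component, {"environment": environment, "status": "deployed"}

components = ["webapp", "api", "assets", "database"]

with ThreadPoolExecutor() as pool:
    results = list(pool.map(lambda c: deploy(c, "QA"), components))

# The model is a *result* of execution: what was changed, where, and how.
resulting_model = dict(results)
print(resulting_model)
```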

Get an early start and try Clarive now. Install your 30-day trial here.



Are you getting started with serverless code? Or are you already seasoned at deploying to AWS Lambda? How fast and easy is it for you to set up a working pipeline for delivering code?


In this webinar you will discover how to go from 0 to deploying a serverless application with Clarive.

clarive serverless webinar

See how our end-to-end tool helps you organize and configure AWS Lambda function code, configuration and pipeline execution, all encapsulated in a cloud DevOps workflow that is easy to set up and great for tracking from function to production.

Join us and discover with Kris Dugardyn, Principal DevOps Architect at Clarive, how you can:

  • Organize and push Lambda code to Clarive
  • Write a rulebook that automates build, test and deployment of Lambda code
  • See how to configure special environment variables for multiple environments and track functions as they deploy
  • Create feature and bug topics as your application grows
  • Track your deployment pipeline with boards

Reserve your seat here


Get an early start and try Clarive now. Install your 30-day trial here.



It is incredible to see how much (and increased) attention DevOps is getting in organizations today.


As a DevOps tool vendor, we of course welcome this, but at the same time it also confirms that successfully implementing DevOps within organizations is not as simple as it is sometimes made out to be. The obvious question is then, of course: why?

In essence DevOps is about improved collaboration between Dev and Ops and automation of all delivery processes (made as lean as possible) for quality delivery at the speed of business. If you want to learn more about DevOps and how to implement it in bigger enterprises, take a look at the 7 Step Strategy ebook on our website.

End-to-end delivery processes can be grouped into 3 simple words: Code, Track, and Deploy.

  • Code” represents those delivery tasks and processes closely aligned with the Dev-side
  • Deploy” represents the delivery tasks and processes closely aligned with the Ops-side of the delivery chain
  • Track” is what enables improvement and better collaboration: tracking progress within the entire delivery chain. To make delivery processes lean, accurate, real-time, and factual process data is required to analyze, learn and improve.

Thinking about the delivery toolchain

Many colleagues have written about the cultural aspects of DevOps and the related challenges of implementing DevOps within an organization. I concur with their statements and am generally in agreement with their approach to a successful DevOps journey.

To change a culture and/or team behaviour though, especially when team members are spread globally, the organization needs to think carefully about its delivery toolchain. Why? Because a prerequisite for a culture to change, or even more basically for people simply to collaborate, is that people are able to share assets and information in real time, regardless of location.

The reality in many organizations today is that people are grouped into teams and each team has the freedom to choose its own tooling, based on platform support, past experience, or just preference. As a result, organizations very quickly find themselves in a situation where the delivery toolchain becomes a big set of disconnected tools for performing various code and deploy tasks. The lack of integration results in many manual tasks to “glue” it all together. Inevitably they all struggle with end-to-end tracking, and a lot of time and energy is wasted on this. I often see this even within teams, because of the plethora of tools they have installed as their delivery toolchain.

clarive application delivery

Clarive Lean Application Delivery

SIMPLICITY is the keyword here.

Funnily enough, this is a DevOps goal! Recall that DevOps is said to apply Lean and Agile principles to the delivery process. So why do teams allow product-based silos? Why do they look for tools that fix only one particular delivery aspect, like build, deploy, version control, test, etc.?

Instead of looking for (often open source) code or tools to automate a particular delivery aspect, a team or organization should look at the end-to-end delivery process and find the simplest way to automate it, and, importantly, without manual activities!

We believe that with workflow-driven deployment, teams can get code, track, and deploy automated the right way: simplified and integrated!

Workflow-driven deployment will allow teams to:

  • Use discussion topics that make it simpler to manage and relate project activities: code branches are mapped 1:1 with their corresponding topic (Feature, User Story, Bugfix, etc.) making them true topic branches. This will provide strong coupling between workflow and CODE.
  • Track progress and automate deployment on every environment through kanban boards. Kanban Boards allow you to quickly visualize status of various types of topics in any arrangement. Within Clarive, Kanban topics can be easily grouped into lists, so that you can split your project in many ways. Drop kanban cards on a board simply into an environment to trigger a deployment. Simple and fully automated! This will provide strong coupling between workflow and DEPLOY automation onto every environment.
clarive kanban

The Clarive Kanban makes tracking progress easier

  • Analyze and monitor status, progress and timing within the delivery process. It even makes it possible to perform pipeline profiling. Profiling allows you to spot bottlenecks and will help you to optimize pipelines and overall workflow using execution profiling data. All data is factual and real-time! This will provide you with ultimate TRACK information within the delivery process.

The right way to go

Why is workflow-driven deployment the right way to go? Because it breathes the true objectives of DevOps: better and seamless collaboration between Dev and Ops, with automation everywhere possible at the toolchain level, not only at the process/human level. This makes a big difference. I believe a lot of companies continue to struggle with DevOps simply because they are shifting their collaboration and automation issues from their current processes and tools to a disconnected DevOps toolchain that exposes similar and new problems. As a result, they become skeptical about the DevOps initiative… and blame the toolchain for it! (always easier than blaming people)

A DevOps platform that enables true workflow-driven deployment blends process automation with execution automation and has the ability to analyze and track automation performance from start to finish. This is what you should look for to enable DevOps faster: a simplified, integrated toolchain that gets the job done with transparency, so organizations can concentrate on the cultural and people-related aspects.


Get an early start and try Clarive now. Install your 30-day trial here.


Clarive SE 7.0.10 Release Notes

Release date: 5 December 2017

We’re pleased to present our latest Clarive SE product release: 7.0.10. This release contains a number of minor fixes and improvements from 7.0.9. We are also excited that this release includes two unique brand new features.

Feature Highlights

  • Custom Kanban swimlanes

  • User Interface enhancements

Custom Kanban swimlanes

The Clarive DevOps Kanban is the most flexible and comprehensive board for managing and tracking delivery. In this release we are including a substantial representation improvement: Now you can create and customize your own swimlanes.

Custom swimlanes can be defined only by the board owner. Users can either clone a board or notify the owner if they want to customize a swimlane.

Configuration has been made simple: select the fields that you want to use as swimlanes and the values that can be used as lanes in the Kanban board.

7.0.10 introduces a new type of swimlane: the release parent topic.

With this you can, for example, get a quick glimpse of which features are included in each release by simply opening the board. Of course, you can filter according to your preferences in swimlane mode.

Kanban boards allow dragging and dropping of topics from one lane to another across all swimlanes. Such an event, as expected, will update topic data and relationships automatically.

The Kanban view with a parent topic swimlane allows you to move topics from one Sprint/Release to another easily.

User Interface enhancements

User experience is key for us, so the product team is continually focusing on how to improve the Clarive interface and UX. At present we are making small changes such as increasing menu padding, redesigning buttons, and improving the overall layout distribution. Our greatest effort in UI enhancements in this release has gone into improving two screens: the job monitor and the topic list.

In the pipeline monitor you can now identify even more easily the relations between code, topics, environments and deployments, job by job, with links to log data, generated artefacts, and job profiling and scheduling information.

With the new right-side row action menu, we simplified the job action UI and the way you interact with each job row from the row itself.

Job statuses can have colors to better identify and distinguish the current status of your deployment. All our job filters are still available in the toolbar.

Click on a job or go to Menu -> Deploy to take a look at the job dashboard. See the full job log by clicking Menu -> Log.

In the topic list we simplified the toolbar by moving topic actions under the “More Options” menu. “More Options” basically covers any action not related to view display.

As with the monitor, we want the user to easily correlate deployments and topics, so this release includes a new column that assists users to determine the current status quickly.

 

Each circle icon has a specific meaning: “MR” is Merged (green if the topic has been merged), “CI/CD” stands for Continuous Integration/Deployment, and “NB” indicates the release Nightly Build status.

Improvements and issues resolved

  • [ENH] – Cleanup and delete expired sessions in purge
  • [ENH] – Get free license website opened in new tab
  • [ENH] – Round user avatar
  • [ENH] – New server log colors
  • [ENH] – Publish internal plugins
  • [ENH] – Remove repository revisions in repository deletion.
  • [FIX] – Small profile changes
  • [FIX] – Rulebook sed operation tests
  • [FIX] – Allow all REPL languages unless role action limit it

What else is new?

We are now sharing our documentation on GitHub. We appreciate your valuable help and feedback in improving our documentation. Pull requests are welcome; just clone the repository and/or submit them through our GitHub site.

You can also directly edit the documentation through the documentation page.

Ready to upgrade?

Just follow the standard procedure for installing the new version. Click here to download and install the latest version.

Acknowledgements

Join us in our Community for product suggestions, feature requests and bug reports.

Thanks to everyone who participated in delivering this great release.


Visit our documentation to learn more about the features of Clarive.