Spoiler alert: if you like short product cycles to deliver features that make your users very happy, you probably already know the answer.


We’ve just released a great guide to help startups get started with lean software delivery. Get your copy here if you haven’t already.

Lean application delivery is the part of lean startup methodology that deals with how software products are built, how to prioritize, how to track and measure and how to automate every aspect of the pipeline.

It’s basically the marriage of DevOps and your workflow: be it agile, kanban, scrumban or, yes, waterfall.

Lean Delivery:
Lean DevOps?
Lean Agile?
KanbanOps?

As we say in the guide, this is about your journey to lean nirvana: a just-in-time flow to deliver value to your users.

Tools like Trello

Post-its on a wall or a simple tool like Trello are a great way to start lean. Trello is no-frills and light on features, and it falls short when you need to do anything beyond organizing simple tasks. But it does get a team collaborating on goals fast.

Github, Gitlab or Bitbucket can get you coding and building stuff… but what stuff? For whom? Why? How do you deliver it? How do you align with the business plan?

While marketing and sales are getting their job done with HubSpot, engineers and product people are fiddling with the gruesome toolchain.

Are the tools running your team, or the other way around? Aha, Jira and Pivotal Tracker can get you very far. Heck, so far that you could probably spend your whole life just perfecting your use of these tools. But that’s not how a startup works. There’s no time. You need to define products, get people building, and you would probably need at least 4 or 5 tools just to:

  • Define product, align with goals and user value;

  • Code, track, deploy;

  • Automate, measure, iterate!

You need your product people to collaborate. And you need to automate everything. From goals, to ideas, to the DevOps pipeline. Just get your team to deliver software the way it’s meant to be: lovable products that iterate fast.

Batteries Inside

Our guide covers a few of our favorite topics in lean delivery, enough for a quick 10-minute read-through. Here’s a hint of some of the topics covered in the guide:

Define your flow before picking your tools

Or how to avoid bloating your startup with the out-of-the-box, out-of-place processes that come from picking tools before you pick the process.

Delivering software is not just an engineering thing

Or how to build traction and make products your users love and the business can sell.

Emotion is a gauge

Be sensitive and measure emotion correctly.

Have a place for ideas

Or how to nurture and follow up on things that will bring value to your users.

MVP all the time: break down work

Or how to deliver value while keeping your team and users motivated.

Isolate changes

Or how to be able to put a release together just-in-time instead of building really huge “develop” branches.

Measure your process, improve fast

Or how to keep your team delivering frequent releases.

Eat your own dog food

Or how to avoid delivering software that will bog your users down.

Avoid release anxiety

Or how to fine-tune your iteration so that users, engineers and management are continuously happy.

It’s only done when it’s in production

Do I need to say more?

I hope this is good enough to get you started. There’s a lot of literature out there on how to build your startup to be lean and mean. Learn, measure, iterate!


Go get your Guide to Lean Delivery.



Given that the EU’s GDPR law (replacing the 1995 Data Protection Directive) will go into effect on May 25, almost every software vendor is jumping on the bandwagon to explain how they can help.


Let me be compliant, and follow the stream.

GDPR and Clarive

When reading about the subject online, I noticed that most articles center around the specific business and legal obligations regarding personal data. These articles focus on physical data processing and the data controller obligations to manage processing. Of course!

This is not what I want to write about in this blog, however. The GDPR is also expected to impact the software delivery life cycle and its related IT development processes for organizations that plan to roll out IT projects within the EU.

There are many lifecycle flavors for delivering software on the market today, whether waterfall, iterative, or agile based. All of them define the way to manage and control the IT project, from planning to rollout, across the different application layers or modules, and platforms. Common software layers that will be directly impacted by the new GDPR law of course include databases and their related architecture, but also the data transport, data security, presentation, and application layers… basically, potentially every software aspect could be affected!

The impact of GDPR on application delivery

This means if your company intends to continue to roll out systems in the EU, you will have to deal with the new functional and technical requirements introduced by the GDPR like the following (this is not an exhaustive list, only some important ones to make the point):

  • Ensure data protection in the system and the organization, by design and by default (Recital 78 and Article 25)
  • Use data encryption when possible (Recital 83 and Articles 6-4(e), 32-1(a))
  • Use data pseudonymization when possible (Recitals 26, 28, 29, 78 and Articles 6-4(e), 25-1, 32-1(a))
  • Anonymize data when possible (Recital 26)
  • Share processing attributes and steps to the data subject in an easy to understand form at the time of data collection, electronically or in writing (Recitals 39, 58 and Articles 12-1, 13-2(a-f))
  • Make data portable to another provider (maybe competitor) (Recital 68 and Articles 13-2(b), 14-2(c), 20)
  • Ensure data is secured, and integrity and confidentiality are maintained, using technical and organizational means under the management of the controller (Recital 49 and Articles 5-1(f), 32-1(b-d))
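To make the pseudonymization requirement concrete, here is a minimal sketch (not taken from the regulation; the identifier and secret below are placeholder values). Replacing a direct identifier with a keyed hash lets systems correlate records without exposing the identity, provided the key is stored separately under the controller’s management:

```shell
# Hypothetical sketch: pseudonymize an email address with a keyed hash (HMAC-SHA256).
# The secret is a placeholder; in practice it would live in a key vault,
# separate from the data, so only the controller can re-identify records.
email="jane.doe@example.com"
secret="replace-with-a-vault-managed-key"
pseudonym=$(printf '%s' "$email" | openssl dgst -sha256 -hmac "$secret" | awk '{print $NF}')
echo "$pseudonym"   # a 64-character hex token that stands in for the email
```

Unlike full anonymization, this mapping is reversible by whoever holds the key, which is exactly why the GDPR still treats pseudonymized data as personal data.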

While a number of these new requirements might be seen as “no-brainers” because they were already part of your software design, others will trigger new requirements that need to be implemented fast and with quality before the law is enforced.

Failing is not really an option. Not complying with the GDPR requirements could result in very serious penalties! The worst-case scenario is a fine of €20 million or 4 percent of the company’s previous year’s total global revenue, whichever is greater. Ouch!

The clock is ticking, how do you track progress and ensure compliance?

With only a few months left, how are you progressing with the delivery of these new requirements? Can you truly track requirement progress throughout your software delivery chain? How confident are you that all policies are correctly implemented?

When speaking to bigger clients, this is often their biggest challenge: they have deployed multiple tools to support software delivery. Coding is fragmented, and the delivery toolchain is often poorly integrated, leading to extensive manual activities within the delivery process and a lack of end-to-end visibility and traceability.

At Clarive we believe in SIMPLICITY. Your software delivery toolchain should be as simple as possible, requiring the minimal set of tools to get the work done fast, with quality, and in a transparent way. For smaller organizations and startups, this can be a single tool: Clarive! Bigger organizations often do need multiple tools to support multiple platforms, but they miss overall orchestration and automation. Not those that use Clarive!

As a simple, lean application delivery platform, Clarive will deliver you the traceability you need to track progress on your GDPR (and other) requirements with ease.

Clarive not only helps with end-to-end tracking; its powerful role-based security, rules, and workflow automation system also offers capabilities that help you ensure everyone on the team remains in compliance with the company’s legal and other requirements, like those for GDPR. For example:

  • Workflow rules: Workflow rules allow you to accept/reject code or actions that do not comply with company policies. For example, our support for code reviews ranges from static code analysis-based decision tree rules to multi-level acceptance approvals within the delivery process.
  • Role-based security: Permissions can be set very granularly according to each member’s role in the project.
  • Cross-platform and process automation: The best way to ensure compliance is to AVOID manual interventions. Clarive allows you to automate every delivery execution step (apart from the coding itself, of course) and process workflow. We support this across teams and platforms, making manual activities (other than approvals) redundant.
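As a hypothetical sketch of such a workflow rule (the linter command is a placeholder, and the op names simply mirror the rulebook syntax used elsewhere on this blog), a build rule could gate changes on a static analysis step and stop the pipeline when the policy check fails:

```yaml
build:
  do:
    - lint = shell: |
        # placeholder static analysis command over the workspace
        mylinter --strict {{ workspace }}
    - if: "{{ lint.rc != 0 }}"
      then:
        - log:
            level: error
            msg: Code does not comply with company policy, rejecting the change
        - shell: exit 1   # a failing step stops the pipeline here
```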

Sounds great? Why don’t you take a look at Clarive now? As our customers can attest, you can get started quickly. Just download your 30-day trial here and try it out yourself.


Today we’re going to see how to deploy an application in Google Play Store with Clarive.


In the following post we’ll see how to automate the compilation of our application and make its subsequent upload to the Play Store completely automatic. Clarive will be the only tool we use throughout the whole process.
All of this will save you money, as you avoid the cost of manually carrying out the compilation and deployment each time a new version of an application is launched.

To develop this, we install a 30-day trial Clarive instance. Through the use of Docker containers we will be able to compile and deploy an Android application in Google Play Store by using the rulebook that we will explain below.
The whole process will be managed through what we have configured in the file .clarive.yml that is in the root folder of our repository.
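The overall shape of that .clarive.yml looks like this skeleton (the comments are ours, summarizing each phase; the actual contents are filled in over the rest of this post):

```yaml
# Skeleton of the .clarive.yml used in this post
vars:
  # pipeline variables (paths, repository names, package name)
build:
  do:
    # compile the app with gradle and publish the APK to the artifact repo
deploy:
  do:
    # upload the APK to the Play Store with Fastlane
post:
  do:
    # notify the user that the deployment finished
```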

Requirements

In order to complete this process there are some requirements, which are as follows:
– The Play Store .json file in order to be able to upload the application automatically.
– The application ready to be compiled and signed automatically.
– A Clarive instance installed, which you can get here
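The .json file is a Google service account key for the Play Store publishing API. It looks roughly like this (every value below is a placeholder):

```json
{
  "type": "service_account",
  "project_id": "my-play-project",
  "private_key_id": "0123456789abcdef",
  "private_key": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----\n",
  "client_email": "play-publisher@my-play-project.iam.gserviceaccount.com",
  "client_id": "123456789012345678901",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token"
}
```

Treat this file as a secret: anyone holding it can publish to your Play Store account.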

Designing our .clarive.yml

The .clarive.yml file will be where we will define the steps that should be followed for the compilation and deployment process.

Defining our variables

First we will declare the variables that will be used throughout our pipeline process.

vars:
  artifact_path: http:///artifacts/repo/
  # Root path to our artifact repository.

  artifact_repo: "public"
  # Name of repository

  artifacts_store_path: "android/app/app-release-{{ ctx.job('change_version') }}.apk"
  # Path inside the repository where the generated APK will be stored

  json_file: "clarive-rulebook.json"
  # Name of our JSON file for its uploading to Play Store

  workspace: "{{ ctx.job('project') }}/{{ ctx.job('repository') }}"
  # Workspace where our development files will be stored

  package_name: codepath.apps.demointroandroid2clarive
  # Pack name for our app in the Play Store

Building our application

Next is the BUILD phase, and here we are going to compile the application and save the generated file in our artifact repository.

Our build.gradle file must be prepared for the automatic digital signing of the application and its subsequent uploading to the Play Store. We must also have previously uploaded a first version of the application to the Play Store manually.

An image with Gradle and the Android SDK should be enough to perform the compilation.

After specifying the Docker image that we will use, we will execute the gradle command within our working directory, so that the compilation and signature of the application can be carried out.

In our particular case, we should use the root user and the sh shell of the image we are using in order to perform the compilation of the application.

build:
  do:
    - image:
        name: 'knsit/gradle-android'
        user: 'root'
        runner: 'sh'

    - shell [Compile application]: |
        cd {{ workspace }}/Application_code/app/
        gradle assembleRelease

To complete the BUILD phase, in which we have compiled the application, we need to save the generated APK file, where the compiled application is located, in our artifact repository:

    - artifact_repo = publish [Store APK in artifacts repository]:
        repository: 'public'
        to: '${artifacts_store_path}'
        from: '{{ workspace }}/Application_code/app/build/outputs/apk/app-release.apk'
    - log:
        level: info
        msg: Application build finished

Once the file has been saved and stored we will have completed our BUILD phase, and we can move on to the DEPLOY phase to carry out the deployment of the file.

Deploying to Play Store

In this phase, we need our APK file with the compiled application, and our .json Play Store authentication file in order to carry out the upload automatically.

In this case, the image that we need for the deployment will be a Docker image with Fastlane installed, and here we are going to prepare the command we need to execute in order to deploy our application to the Play Store.
In this case we will be uploading it to the alpha track of the application.

Our .json file is located within our development files where the application is, which means we can place it within our workspace.

deploy:
  do:
    - image:
        name: 'levibostian/fastlane'

    - shell [Upload application with Fastlane]: |
        fastlane supply --apk .artifacts/{{ artifact_repo }}/{{ artifacts_store_path }} -p {{ package_name }} --json_key {{ workspace }}/{{ json_file }} -a alpha

    - log:
        level: info
        msg: The app has been deployed to the Play Store

In this way, if we carry out the push to our repository, Clarive will automatically run a deployment (CI) and our application will be uploaded onto our Play Store.

Notifications

Finally, in the POST step we will email the user that launched the deployment to inform them that it has been completed, so they can check the results.

post:
  do:
    - log:
        level: info
        msg: Deploy finished
    - email:
        body: |
          Hello {{ctx.job("user") }},
          <b>{{ ctx.job('name') }}</b> has finished and your app has been deployed. Check it out in <a href= "https://play.google.com/">your Play Console</a>.

          Also your apk file has been stored in your artifacts repository:
              ${ artifact_path }${artifact_repo}/${artifacts_store_path}.

        subject: Application deployed to Play Store
        to:
           - ${ctx.job("user")}

To conclude

Now, with our .yml file prepared, we can use Clarive’s interface to visualize the steps the rulebook will follow.
To start the deployment, we only need to perform the push on the repository, just as we have seen before in other posts. When performing the push, Clarive automatically creates a deployment (Continuous Integration) and executes all the code found in the .yml.

If everything has run correctly, we should be able to see on the monitor how our deployment is being executed.

Job successfully finished

By doing all of this, we have carried out the whole process of compiling and uploading our application to Google Play Store through the use of Clarive’s rulebooks and different Docker containers.
If we look at our page on the Play Store console, we’ll be able to see a message indicating that we have an application ready and waiting.

APK v2 deployed

APK details

This is a brief example that can serve as a reference. You can configure different environments to deploy to, and you can change the type of operation carried out in each phase in a completely customizable way, so that it adjusts to what each person needs for their development work and deployments.


Visit our documentation to learn more about the features of Clarive.



In previous videos we have deployed a Git-managed mainframe application, a Java application and a mainframe Endevor package up to the QA environment.


All of the deployments for each of the technologies and target platforms were done with Jobs created by the same versioned pipeline rule. Now that we have reached the final deployment to the production environment, we don’t want to deploy individual changesets anymore. We want to group them into a release/sprint and deploy them together.

We will do that in a single job created by the same pipeline as the one used for the changesets.
In summary, with Clarive we deploy in a single job from a single pipeline, multiple technologies to multiple environments on multiple platforms. This assures us a consistent way of deploying.


Visit our documentation to learn more about the features of Clarive.



In this video we will create a webhook in our rulebook that provisions a VM in our Azure instance.


Following the instructions given in the previous blog “DevOps Webservices: a Clarive primer”, we show how easy everything is in Clarive:


Get an early start and try Clarive now. Install your 30-day trial here.



Enterprises are in constant search of ways to deliver faster, ideally in a continuous/frequent fashion, with full traceability and control, and of course with excellent business quality.


DevOps as an approach continues to get a lot of attention and support in achieving these goals.

As readers can find in other blogs and articles on DevOps, DevOps aims at bringing Development and Operations closer together, allowing better collaboration between them, and facilitating a smoother handover during the delivery process. Automation remains a critical component in the technical implementation.

What strikes me every time I discuss the subject with customers, analysts, and other colleagues in the field is that we very quickly seem to end up in a DevOps toolchain discussion cluttered by numerous best-of-breed point products that one way or another need to integrate, or at least work together, to get the delivery job “done”.

Why is that? Why does a majority end up with (too) many tools within the delivery toolchain?

We fail to search for simplicity

If you read what analysts like Gartner or Forrester are writing about implementing DevOps, and if you read closer into what Lean IT stands for, a common theme that surfaces is SIMPLICITY.

If you want to enhance collaboration between delivery stakeholders, if you want to make the handover of deliverables easier, if you want to automate the end-to-end delivery process, then you should look for ways to make your delivery toolchain simpler, not more complex.

As part of the analysis and continual improvement process of the delivery value stream, we look for better ways to do specific tasks. We should in addition carefully look at alternatives to avoid manual activities in processes when possible. This is just applying common Lean practices in the context of application delivery.

Many (bigger) enterprises remain overly siloed, and this often results in suboptimal improvement cycles. When developers face issues with the build process, they look on the web for better build support for their specific platform, ideally in open source, so they can “tweak” it for their needs if required (it is often a matter of retained “control”). If quality suffers, developers and testers each run their own quest to improve quality from their viewpoint, leading to the selection and usage of specific point products by each team, sometimes without even being aware of each other’s choices.

I could continue with more examples in the same vein, but the pattern is obvious: when teams keep looking for the best solution “within their silo”, most of the time the organization ends up with an overly complex and tool-rich delivery toolchain.

Look at delivery in a holistic way, from a business perspective

The above approach does not respect some important Lean principles, though: look at the value stream from the customer’s perspective, in a holistic way, creating flow while eliminating waste.

These are some of the things you should look at while analysing and improving your delivery toolchain:

  • How does demand/change flow into the process? How is the selection/acceptance process handled? How is delivery progress tracked?

  • How automated are individual delivery steps (like build, provision, test, deploy)? How is the delivery process/chain itself automated? Are any manual activities happening? Why? Does automation cover ALL platforms, or only a subset?

If you would like to learn more about this subject, I can recommend reading the following ebook on the Clarive website: “Practical Assessment Guide for DevOps readiness within a hybrid enterprise”.

Clarive CLEAN stack

A C.L.E.A.N way to deliver quality

At Clarive we believe simplicity is vital for sustained DevOps and delivery success.
We designed the C.L.E.A.N stack exactly with this in mind:

Clarive Lean & Effective Automation requiring Nothing else for successful delivery.

Indeed, Clarive allows you to:

  • Implement Lean principles and accurate measurement and reporting with real-time and end-to-end insight

  • Implement effective and pragmatic automation of both delivery processes as well as delivery execution steps such as build, provision, test, and deploy.

  • Do all this from within the same product, so there is no need to use anything else to get the job done! No real need to implement artifact repositories, workflow tools, or anything else; Clarive alone will do.

Of course, if you have already made investments in tooling, Clarive will collaborate with those tools bi-directionally to get you started quickly. After all, DMAIC and other improvement cycles are cyclic and continual, so you can further refine after you’ve started, if you desire more simplicity…

This is an evolution I have seen many of our clients go through: they initially look at and start with Clarive because they have certain automation or orchestration needs. Then they find out they can do with Clarive what they did with Jenkins, and switch to Clarive. Then they learn about Clarive’s CI repository and decide to eliminate Nexus. As Clarive has a powerful and integrated workflow automation capability, they realise they could also do without Jira and Bitbucket… and so on. It has saved companies effort and cost.

In case you are interested in Clarive, start with the 30-day trial.

See also some sample screenshots of the tool below.

Clarive tool

Clarive tool_screenshot

Clarive tool_screenshot_deploy package


Visit our documentation to learn more about the features of Clarive.


In today’s fast-moving world of DevOps, we need ARA more than ever to take control over complexity to deliver a great workflow that can unite teams at different speeds in the enterprise.

These slides go over why, when and how ARA may apply to your enterprise.

 


Visit our documentation to learn more about the features of Clarive.



So now we can write webservices with Clarive rulebooks. We call them webhooks, but they are actually inbound.


Let me explain. Let’s start with the problem at hand.

The Problem at hand

With Clarive you can write your automation as rulebook files. (You can also write them visually with Clarive EE, but that’s another story.) Rulebooks can do wonderful things, like automating your pipeline: building, testing and deploying apps. They can also be used to provision your infrastructure (think cloud instance or DB) and can be triggered by many different rules, like a topic modification or a branch being pushed. They can also be scheduled to run on a cron.

But what if you want to trigger them from an outside event? Say a user opens an issue on your issue tracker, or someone pushes something to a repository somewhere, and that’s when you want a rule in your rulebook to run. To do that you can call into Clarive. But instead of running a generic Clarive API that calls a rulebook, we wanted to expose meaningful URLs that make more sense to the caller.

What is a Rulebook

A rulebook is a file (or set of files) checked into a Git repository in your Clarive instance. It all starts with the .clarive.yml file.

Your rulebook will contain rules. Rules can be build, test, deploy, or events like topic_modify or repository_update. And then there are webhook rules. Webhook rules are events whose names start with a / slash character. The event name becomes a URL fragment to be called into.

Webhook rules behave like exposed webservices right into your instance, running within your repository.

Here’s an example:

/hello_world:
    - echo: running the hello world webhook

    # do some meaningful stuff here

    - web_response:
        # return something to the caller

Anatomy of a web call

When you call a webhook rule using a URL like this:
https://{myclariveserver}/rule/json/{MyProject}/{MyRepo}/hello_world?api_key={my_user_api_key}

It goes through the following steps:

  • Authenticates the user (users must be authenticated via api_key).
    your API key

    This is your user API key

  • Locates your project MyProject and its associated repository MyRepo

  • Finds the .clarive.yml file
  • Looks for a webservice defined with / and called hello_world
  • Executes the echo code and returns the result in web_response
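Putting the pieces together, the call can be sketched like this (the server name, project, repository and API key are all placeholder values):

```shell
# Assemble the webhook URL from its parts; every value here is a placeholder
server="myclarive.example.com"
project="MyProject"
repo="MyRepo"
api_key="0123456789abcdef"
url="https://${server}/rule/json/${project}/${repo}/hello_world?api_key=${api_key}"
echo "$url"
# With a real instance you would then run:  curl "$url"
```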

A hardcore example: provision an Azure VM

To run this example you can create an instance in our 30-day trial. That will give you the complete infrastructure you need.

  1. Get a free Azure account if you don’t have one already.
  2. Get your 30-day trial Clarive instance
  3. Once you get an email with the instructions, you can log in to your Clarive instance.
  4. Create a new project and a git repository associated to it.
  5. Create a story in your project; this will automatically create a new branch in your repo
    your topic

    Create a topic in Clarive

  6. Create the Azure variables that you will need to log in using a service principal

    Azure vars

    Your secret vars that allow you to log in to your Azure account

  7. Clone this repo into your local workspace, check out the current branch, and modify your .clarive.yml with the webservice.

    /create_azure_vm:
       image:
          name: microsoft/azure-cli # docker image to run az commands
       do:
          - myoutput = shell: |
              az login --service-principal -u {{ ctx.var('az-service-principal') }} --password {{ ctx.var('az-password') }} --tenant {{ ctx.var('az-tenant') }}
              az vm create --resource-group {{ ctx.var('az-resource-group') }} --name myFirstVM --image centos --admin-username {{ ctx.var('az-vm-username') }} --admin-password {{ ctx.var('az-vm-password') }} --authentication-type password
          - if: "{{ myoutput.rc == 0 }}"
            then:
               - result =: "VM created successfully!"
            else:
               - result =: "Oops! Something went wrong; review your Azure instance."
          - web_response:
              body: ${result}
  8. Push the changes (after committing them, obviously) to your branch, then promote your topic to the PROD environment, where your branch will be merged into master, or merge it manually.

  9. Your defined webservice will now be available, so call it from your browser or on the command line using curl.

    curl https://{your_instance}.clarive.io/rule/json/{your_project}/{your_repo}/create_azure_vm?api_key={your_user_api_key}
    
  10. Finally, you can see your VM created in your Azure instance.

    Your VM created in Azure with a Clarive webhook

That’s it. You’ve just created your first DevOps webservice in Clarive 🙂


Visit our documentation to learn more about the features of Clarive.