Event-driven scripting in Kubernetes with Brigade

In most projects that I've been part of, sooner or later the need for various types of automation jobs arises: cleaning up old files, moving database backups, running health checks or system tests, and so on.

Historically we've implemented these tasks using, for example, the Windows Task Scheduler or a custom Windows Service app. More recently, we've been using Azure Automation jobs for this. Sometimes it can also make sense to use CI/CD automation tools like Azure DevOps for these jobs.

With the move to containers and Kubernetes, it can make a lot of sense to use that platform not just for the business apps that you are developing, but also for these types of automation workloads. It means that you don't have to invest in and manage another platform, and you can leverage existing and 3rd party container images to build automation workflows.

Brigade

Brigade is a platform that makes it easy to create simple or complex workflows that run on Kubernetes. You use Docker containers as the basis for each step in the workflow, and wire them together using JavaScript.


Brigade is an open-source project; read more about it at:
https://brigade.sh/

Brigade runs on any vanilla Kubernetes cluster; you don't need anything extra installed to run Brigade pipelines.

Installing Brigade is as easy as running the following two commands:

helm repo add brigade https://brigadecore.github.io/charts
helm install brigade/brigade --name brigade-server
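
Once the chart is installed, you can do a quick sanity check that the release is up (brigade-server is the release name used above):

helm status brigade-server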

The image below shows the main concepts in use by Brigade:

[Image: Brigade concepts – projects, builds, jobs and gateways]

Project
For every automation workflow that you want to implement, you will create a project. Every project has some metadata attached to it, such as id, name and so on. It also either contains or references the JavaScript code that contains the pipeline logic.
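
Projects are typically created with the brig CLI that ships with Brigade. The command below starts an interactive prompt that asks for things like the project name and a shared secret (a sketch, assuming you have brig installed and pointed at your cluster):

brig project create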

Build
A build is created every time a script is triggered through some external event. The build runs until all jobs are finished, and you can view the output logs from the running build as well as after it has finished.

Job
Each build will contain one or more jobs. For each job, a container instance is started, and then a series of tasks is executed inside that container. You specify the jobs and their tasks in the JavaScript code, as well as how the jobs should be scheduled.

Gateway
A gateway transforms outside triggers (a Git pull request, a Trello card move etc) into events that are passed into the pipeline, where you handle them in your code.

Brigade comes with a generic gateway that listens for and accepts POSTed JSON messages in any format (it also explicitly supports the CloudEvents format). In addition, there are several custom gateways that make integration a lot easier with services such as GitHub, Docker Container Registry or Azure Event Grid.

A basic “hello-world” type of Brigade pipeline can look like this:

const { events, Job } = require("brigadier");

//Handler for exec event
events.on("exec", () => {

  var job = new Job("say-hello", "alpine:3.8");
  job.tasks = [
    "echo Hello",
    "echo World"
  ];

  job.run();
  
});

Here, the pipeline is triggered by the exec event. Inside that event handler it starts a new job called "say-hello", which contains two tasks where each task just prints a message. The job is executed inside a container from the alpine:3.8 image, which will be downloaded from Docker Hub and started automatically for you. Of course, you can use any public image, or a private image from your own container registry.
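
To try the script without setting up a gateway, the exec event can be triggered manually with the brig CLI (a sketch; my-project is a hypothetical project name, and the script is assumed to be saved as brigade.js):

brig run my-project -f brigade.js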

Brigade has excellent documentation; I encourage you to read up on it more at https://docs.brigade.sh/

In this post I will show a slightly more complex example, taken from a recent customer project where we developed a microservice application running on Kubernetes and found the need for some extra automation.

Removing Kubernetes environment on PR completion

Kubernetes makes it easy to create new isolated environments for your application when you need to. A common desire of many teams is to deploy the application into a fresh environment every time a pull request is created. This lets the team and stakeholders test and verify the feature that is being developed, before it gets merged into the master branch.

Using Azure DevOps, it's quite easy to set up a release pipeline where every PR is deployed into a new namespace in Kubernetes. You can enable stages in a pipeline to be triggered by pull requests, and then use information from that PR to create a new namespace in your Kubernetes cluster and deploy the app into that namespace.

The problem we experienced recently at a customer was: how can we make sure that this namespace (and everything in it) is removed once the PR is completed and merged? We can't keep it around, since that would eventually consume all the resources in the cluster, and we don't want to rely on cleaning it up manually.

This turned out to be a perfect case for Brigade. We can configure a service hook in Azure DevOps so that every time a PR is updated, we trigger a Brigade pipeline. In the pipeline we check if the PR was completed, and if so, extract the relevant information from the PR and clean up the corresponding namespace. To do this, we used existing container images that let us run helm and kubectl commands.

The Brigade script looks like this:

const { events, Job } = require("brigadier");
const util = require('util')

const HELM_VERSION = "v2.13.0"
const HELM_CONTAINER = "lachlanevenson/k8s-helm:" + HELM_VERSION;

const KUBECTL_VERSION = "v1.12.8";
const KUBECTL_CONTAINER = "lachlanevenson/k8s-kubectl:" + KUBECTL_VERSION;

events.on("simpleevent", (event, project) => {
    const payload = JSON.parse(event.payload);
    const prId = payload.resource.pullRequestId;

    if (!payload.resource.sourceRefName.includes('/feature/') && !payload.resource.sourceRefName.includes('/bug/')) {
        console.log(`The source branch ${payload.resource.sourceRefName} is not a /feature/ or /bug/ and is therefore skipped.`)
        return;
    }

    if (payload.resource.status !== "completed" && payload.resource.status !== "abandoned") {
        console.log(`PullRequest not complete or abandoned (current status: ${payload.resource.status}).`);
        return;
    }

    var helm_job = new Job("helm-delete-release", HELM_CONTAINER);
    helm_job.env = {
        'HELM_HOST': "10.0.119.135:44134"
    };
    helm_job.tasks = ["helm init --client-only", `helm delete --purge samplewebapp-${prId}`];

    var kubectl_job = new Job("kubectl-delete-ns", KUBECTL_CONTAINER);
    kubectl_job.tasks = [`kubectl delete namespace samplewebapp-${prId}`];

    console.log("==> Running helm_job Job")
    helm_job.run().then(helmResult => {
        console.log(helmResult.toString())

        kubectl_job.run().then(kubectlResult => {
            console.log(kubectlResult.toString());
        });
    })
});

events.on("error", (e) => {
    console.log("Error event " + util.inspect(e, false, null))
    console.log("==> Event " + e.type + " caused by " + e.provider + " cause class" + e.cause + e.cause.reason)
})

events.on("after", (e) => {
    console.log("After event fired " + util.inspect(e, false, null))
});
 

This code runs when the "simpleevent" event is triggered. This event is handled by the generic gateway in Brigade, and can be used to send any kind of information (as a JSON document) to your pipeline. To trigger this event, we configure a service hook in Azure DevOps for the "Pull request updated" event, and point it to the generic gateway:

[Screenshot: Azure DevOps service hook configuration for the "Pull request updated" event]

The full URL looks like this:

https://brigadedemo.ehn.nu/simpleevents/v1/brigade-55cbf57f7aaeb59afa1fe4d33ca6a5a635eefe060b057c423c97a0/somesecret

The URL contains the project id and the secret that were specified when creating the project. This is how external requests are authenticated and routed to the correct Brigade script.
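
You can test the gateway without involving Azure DevOps by POSTing a JSON document of the same shape that the script expects (a sketch; the payload values are made up, and the URL is the one from above):

curl -X POST -H "Content-Type: application/json" \
  -d '{"resource": {"pullRequestId": 99, "sourceRefName": "refs/heads/feature/my-feature", "status": "completed"}}' \
  https://brigadedemo.ehn.nu/simpleevents/v1/brigade-55cbf57f7aaeb59afa1fe4d33ca6a5a635eefe060b057c423c97a0/somesecret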

Inside the event handler we use two different container images. The first one is for running a Helm command to delete the Kubernetes deployment. Since Helm can't delete the namespace, we need to run a second job inside another container image that contains the kubectl tool, where we can delete the namespace by running:

kubectl delete namespace samplewebapp-${prId}

The prId variable is parsed from the "Pull request updated" event coming from Azure DevOps. We use the id of the pull request to create a unique namespace (in this case, pull request 99 will be deployed into the samplewebapp-99 namespace).

NB: You will need to make sure that the service account for Brigade has enough permissions to delete the namespace. Namespaces are a cluster-level resource, so this requires higher permissions than deleting a deployment inside a namespace.

One easy way to do this is to assign the cluster-admin role to the Brigade service account; this is not recommended for production though.
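
A sketch of such a binding, assuming Brigade runs in the default namespace and its worker uses a service account named brigade-worker (check your own cluster for the actual name, and again, prefer a narrower role than cluster-admin in production):

kubectl create clusterrolebinding brigade-worker-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:brigade-worker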

Now, when a PR is complete, our pipeline is triggered and it will delete the deployment and then the namespace.

To view the running jobs and their output, you can either use the Brigade dashboard (called Kashti) by running brig dashboard, or you can install the Brigade terminal, which gives you a similar UI inside your favourite console.

Here is the output from the PR job in the brigade terminal:

[Screenshot: the PR build in the Brigade terminal, showing the two jobs]

It shows that two jobs were executed in this build, and you can see the images that were used and the id of each job. To see the output of a job, just press return on it:

[Screenshot: output of the helm job]

Here you can see the output of the helm job that deletes the Helm deployment for the corresponding pull request.

Summary

I encourage you to take a look at Brigade; it's easy to get started with, and you can implement all sorts of automation without having to resort to other platforms and services. And although JavaScript might put some people off, the power of a real programming language (compared to some DSL) pays off when you want to implement something non-trivial.

If you are already using Kubernetes, why not use it for more than just your apps!

Thanks to my colleague Tobias Lolax (https://twitter.com/Tobibben) who did the original implementation of this for our customer.

Deploy Azure Web App for Containers with ARM and Azure DevOps

Using Docker containers for building and running your applications has many advantages, such as consistent builds, build-once-run-anywhere and an easy, standardized packaging and deployment format, just to name a few.

When it comes to running the containers you might look into container orchestrators such as Kubernetes or Docker Swarm. Sometimes though, these orchestrators can be overkill for your applications. If you are developing web applications with only a few dependent runtime components, another option is to use Azure Web App for Containers, which is a mouthful for saying that you can use your beloved Azure Web Apps with all the functionality that comes with it (easy scaling, SSL support etc), but deploy your code in a container. Best of both worlds, perhaps?

In this post I will show how you can create an ARM template that creates the Azure Web App with the necessary settings to connect it to an Azure Container Registry, and how you set up an Azure Pipeline to build and deploy the container.

The code for this blog post is available on GitHub:

https://github.com/jakobehn/containerwebapp

The release definition is available here:
https://dev.azure.com/jakob/blog

Prerequisites

  • An Azure subscription (duh)
  • An Azure Container Registry
  • An Azure DevOps project

Creating the ARM Template

First up is creating an ARM template that will deploy the web app resource to your Azure subscription. Creating an ARM template for a web app is easy: you can use the Azure Resource Group project in Visual Studio (this project type is installed with the Azure SDK) and select the Web app template:

[Screenshot: the Web app template in the Azure Resource Group project dialog]


Now, we need to make some changes in order to deploy this web app as a container. First of all we will change some settings of the App Service Plan.

Set the “kind” property to “linux”, to specify that this is a Linux hosted web app (Windows containers for Web Apps are in preview at the moment).

Then we also need to set the "reserved" property to true (the documentation just says: 'If Linux app service plan true, false otherwise').

{
  "apiVersion": "2015-08-01",
  "name": "[parameters('hostingPlanName')]",
  "type": "Microsoft.Web/serverfarms",
  "location": "[resourceGroup().location]",
  "kind": "linux",
  "tags": {
    "displayName": "HostingPlan"
  },
  "sku": {
    "name": "[parameters('skuName')]",
    "capacity": "[parameters('skuCapacity')]"
  },
  "properties": {
    "name": "[parameters('hostingPlanName')]",
    "reserved": true
  }
},

For the web app definition, we need to set the “kind” property to “app,linux,container” to make this a containerized web app resource. We also need to set the DOCKER_CUSTOM_IMAGE_NAME to something. We will set the correct image later on from our deployment pipeline, but this property must be here when we create the web app resource.

{
  "apiVersion": "2015-08-01",
  "name": "[variables('webSiteName')]",
  "type": "Microsoft.Web/sites",
  "kind": "app,linux,container",
  "location": "[resourceGroup().location]",
  "tags": {
    "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
    "displayName": "Website"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
  ],
  "properties": {
    "name": "[variables('webSiteName')]",
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
    "siteConfig": {
      "DOCKER_CUSTOM_IMAGE_NAME": "containerwebapp"
    }
  }
},

Again, the full source is available over at GitHub (see link at top).
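
If you want to try the template before wiring up the pipeline, it can be deployed from the command line with the Azure CLI (a sketch; the resource group name and the template file names are assumptions, adjust them to your project):

az group deployment create \
  --resource-group MyResourceGroup \
  --template-file WebSite.json \
  --parameters @WebSite.parameters.json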

Azure Pipeline

Let's create a deployment pipeline that builds and pushes the image, then deploys the ARM template and finally the web app container.

First up is the build definition. Here I'm using YAML, since it lets me store the build definition in source control together with the rest of the application:

NB: You need to change azureSubscriptionEndpoint and azureContainerRegistry to the names of your service endpoint and Azure container registry.


azure-pipelines.yml

name: 1.0$(rev:.r)

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: Docker@1
  displayName: 'Build image'
  inputs:
    azureSubscriptionEndpoint: 'Azure Sponsorship'
    azureContainerRegistry: jakob.azurecr.io
    dockerFile: ContainerWebApp/Dockerfile
    useDefaultContext: false
    imageName: 'containerwebapp:$(Build.BuildNumber)'

- task: Docker@1
  displayName: 'Push image'
  inputs:
    azureSubscriptionEndpoint: 'Azure Sponsorship'
    azureContainerRegistry: jakob.azurecr.io
    command: 'Push an image'
    imageName: 'containerwebapp:$(Build.BuildNumber)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish ARM template'
  inputs:
    PathtoPublish: 'ContainerWebApp.ResourceGroup'
    ArtifactName: template

The build definition performs the following steps:

  1. Builds the container image using the Docker task, where we point to the Dockerfile and give it an image name
  2. Pushes the container image to the Azure Container Registry
  3. Publishes the content of the Azure resource group project back to Azure Pipelines. This will be used when we deploy the resource group in the release definition

Running this build should push an image to your container registry.
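
You can verify that the image and tag landed in the registry with the Azure CLI (a sketch; the registry name jakob matches the pipeline above, replace it with your own):

az acr repository show-tags --name jakob --repository containerwebapp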

Now we will create a release definition that deploys the resource group and then the container web app.

First up is the resource group deployment. Here we use the Azure Resource Group Deployment task, where we point to the ARM template JSON file and the parameters file. We also override the name of the app hosting plan, since that is an input parameter to the template.

[Screenshot: the Azure Resource Group Deployment task]

Then we use the Azure App Service Deployment task to deploy the container to the web app. Note that we are using the preview 4.* version since that has support for deploying to Web App for Containers.

[Screenshot: the Azure App Service Deployment task]

In the rest of the parameters for this task we specify the name of the container registry, the name of the image and the specific tag that we want to deploy. The tag is fetched from the build number of the associated build.

Finally we set the following app settings (an equivalent Azure CLI command is shown after the list):

  • DOCKER_REGISTRY_SERVER_URL: the URL of the Docker registry
  • DOCKER_REGISTRY_SERVER_USERNAME: the username for the registry. For ACR, this is the name of the registry
  • DOCKER_REGISTRY_SERVER_PASSWORD: the password for the registry. For ACR, you can get this from the Access Keys blade in the Azure portal
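
A sketch of setting the same values from the Azure CLI (the resource group and web app names are assumptions; the registry values match the ACR used above):

az webapp config appsettings set \
  --resource-group MyResourceGroup \
  --name MyContainerWebApp \
  --settings DOCKER_REGISTRY_SERVER_URL=https://jakob.azurecr.io \
             DOCKER_REGISTRY_SERVER_USERNAME=jakob \
             DOCKER_REGISTRY_SERVER_PASSWORD=<password>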

[Screenshot: the app settings for the deployment task]

That's it. Running the release deploys the resource group (which takes 1-2 minutes the first time) and then the container to the web app. Once done, you can browse the site and verify that it works as expected:

[Screenshot: the deployed web app running in the browser]

Microsoft Announces Next Generation of Visual Studio Release Management

Today at the Microsoft Connect() event, Microsoft announced the public preview of the brand new version of Visual Studio Release Management. The public preview is available on Visual Studio Team Services (a.k.a. Visual Studio Online, in case you missed that announcement! :-)), and will debut on premises later in 2016.

 

So, what’s this new version about? Let’s summarize some of the major features about it:

Web Based

The existing version of Visual Studio Release Management, originally acquired from InCycle back in 2013, uses a standalone WPF client for authoring, triggering and tracking releases. It always felt a bit awkward and wasn't really integrated with the rest of TFS. The new version is completely rewritten as a web based experience and is part of the web access, in a new "Release" tab.

[Screenshot: the new Release hub in web access]

From this hub you can author release definitions, manage approval workflows and trigger and track releases.

 

Shared Infrastructure with TFS Build

With the new build system in TFS 2015, Microsoft already has a great automation platform that is scriptable, cross platform and easy to deploy and configure. So it makes sense that the new version of Visual Studio Release Management is built upon the same platform. The build agent that is used for running builds can also be used for executing releases.

It also means that all the new build tasks that are available in TFS Build 2015 can also be used as part of a release pipeline.

[Screenshot: build tasks available in a release pipeline]

 

Cross Platform Support

As mentioned above, since the same agent is used for releases, we can also run them on Linux and OS X, since these are supported platforms. There are many tasks out of the box for doing cross platform deployment, including Chef and Docker.

[Screenshot: cross platform deployment tasks]

 

Track Releases across Environments

The new web UI makes it easy to get an overview of the status of your existing environments, and of which version of each application is currently deployed. In the example below we can see that the new release of the "QuizBox" application has been deployed to Dev and QA, has gone through automated and manual acceptance tests, and is currently being deployed to the staging slot of the production environment.

[Screenshot: release status across environments]

 

Configuration Management

One of the biggest challenges with staged deployments is configuration management. The different environments often have different configuration settings: things like connection strings, account names and passwords. In Visual Studio Release Management vNext these configuration variables can be authored either on the environment level, or on the release definition level where they apply to all environments.

We can easily compare the configuration variables across our environments, as shown below.

[Screenshot: comparing configuration variables across environments]

 

Live Release Log Output

As with the new build system in TFS 2015, VSRM vNext gives you excellent real time logging from the release agent as the release is executing.

[Screenshot: live release log output]

 

Release Approval

Every environment in the release pipeline can trigger approvals, either before the deployment starts or after. For example, before a new version of an application is deployed to the QA environment, the QA team should be able to approve it to make sure that the environment is ready.

Below you can see a release that has a pending approval. Every approver that should take action will receive a notification email with a link to this page.

[Screenshot: a release with a pending approval]

 

Do you want to learn more?

For the last 6 months, my fellow ALM MVP and good friend Mathias Olausson and I have been busy working on a book that covers, among other things, this new version of Visual Studio Release Management. The title of the book is Continuous Delivery with Visual Studio ALM 2015, and it covers how the process of continuous delivery can be implemented using the Visual Studio 2015 ALM tool suite.

I will write a separate blog post about the book, but here is the description from Amazon:


This book is the authoritative source on implementing Continuous Delivery practices using Microsoft’s Visual Studio and TFS 2015. Microsoft MVP authors Mathias Olausson and Jakob Ehn translate the theory behind this methodology and show step by step how to implement Continuous Delivery in a real world environment.

Building good software is challenging. Building high-quality software on a tight schedule can be close to impossible. Continuous Delivery is an agile and iterative technique that enables developers to deliver solid, working software in every iteration. Continuous delivery practices help IT organizations reduce risk and potentially become as nimble, agile, and innovative as startups.

In this book, you’ll learn:

  • What Continuous Delivery is and how to use it to create better software more efficiently using Visual Studio 2015
  • How to use Team Foundation Server 2015 and Visual Studio Online to plan, design, and implement powerful and reliable deployment pipelines
  • Detailed step-by-step instructions for implementing Continuous Delivery on a real project

 

You can find the book at http://www.amazon.com/Continuous-Delivery-Visual-Studio-2015/dp/1484212738.

We hope that you will find it valuable!

AssociateRecentWorkItems Extension available for Visual Studio 2015

I finally got around to upgrading the Inmeta AssociateRecentWorkItems extension to support Visual Studio 2015. Several people have contacted me about this, sorry that it took so long!

 

About the extension

This extension makes it easy to associate multiple check-ins with the same work items, as it shows a list of recently associated work items from which you can easily associate one with the current pending changes.

[Screenshot: Osiris AssociateRecentWorkItems in Visual Studio 2015 – associating work items]

 

Note about the rebranding

At the same time, I rebranded this extension to use the Osiris brand, Osiris being the company that I originally worked for before it was acquired by Inmeta. I don't work for Inmeta anymore, so my former colleague Terje Sandström and I decided to bring back the Osiris name and created a GitHub account for it called OsirisOS (OS for open source then…).

It's available over at https://github.com/OsirisOS and currently contains one repo, for the AssociateRecentWorkItems extension. Hopefully we will add more cool and useful projects around Visual Studio ALM here; if you are interested in contributing, let us know!

[Image: the Osiris logotype – @OsirisOS]

So, the extension is now called Osiris AssociateRecentWorkItems and is available at https://visualstudiogallery.msdn.microsoft.com/3fa82205-e0f0-4874-a38b-023435fa2802

Hope that you will find it useful. I do plan to add support for Git when I get around to it! Please poke me if you want it done sooner rather than later… 🙂