Event-driven scripting in Kubernetes with Brigade

In most projects that I've been part of, sooner or later the need for various types of automation jobs arises: cleaning up old files, moving database backups, running health checks or system tests, and so on.

Historically, we've implemented these tasks using, for example, the Windows Task Scheduler, or through some custom Windows Service app. More recently, we've been using Azure Automation jobs for this. Sometimes it can also make sense to use CI/CD automation tools like Azure DevOps for these jobs.

With the move to containers and Kubernetes, it can make a lot of sense to use that platform not just for the business apps that you are developing, but also for these types of automation workloads. It means that you don't have to invest in and manage another platform, and you can leverage existing and 3rd party container images to build automation workflows.

Brigade

Brigade is a platform that makes it easy to create simple or complex workflows that run on Kubernetes. You use Docker containers as the basis for each step in the workflow, and wire them together using Javascript.

Brigade is an open-source project, read more about it at:
https://brigade.sh/

Brigade runs on any vanilla Kubernetes cluster; you don't need anything extra installed to run Brigade pipelines.

Installing Brigade is as easy as running the following two commands:

helm repo add brigade https://brigadecore.github.io/charts
helm install brigade/brigade --name brigade-server

The image below shows the main concepts in Brigade:

image

Project
For every automation workflow that you want to implement, you create a project. Every project has some metadata attached to it, such as id, name and so on. It also either contains or references the JavaScript code that contains the pipeline logic.

Build
A build is created every time a script is triggered by some external event. The build runs until all jobs are finished, and you can view the output logs from the running build as well as after it has finished.

Job
Each build will contain one or more jobs. For each job, a container instance is started, and then a series of tasks is executed inside that container. You specify the jobs and their tasks, and how the jobs should be scheduled, in the JavaScript code.

Gateway
A gateway transforms outside triggers (a Git pull request, a Trello card move, etc.) into events that are passed into the pipeline, where you handle them in your code.

Brigade comes with a Generic gateway that listens for and accepts POSTed JSON messages in any format (it also explicitly supports the CloudEvents format). In addition, there are several custom gateways that make integration a lot easier with services such as GitHub, Docker Container Registry or Azure Event Grid.

A basic “hello-world” type of Brigade pipeline can look like this:

const { events, Job } = require("brigadier");

//Handler for exec event
events.on("exec", () => {

  var job = new Job("say-hello", "alpine:3.8");
  job.tasks = [
    "echo Hello",
    "echo World"
  ];

  job.run();
  
});

Here, the pipeline is triggered by the exec event, and inside that event handler it starts a new job called "say-hello" which contains two tasks, where each task just prints a message. The job is executed inside a container from the alpine:3.8 image, which will be downloaded from Docker Hub and started automatically for you. Of course, you can use any public image, or a private image from your own container registry.
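To try this out locally, you can trigger the exec event using the Brigade CLI. A minimal sketch, assuming the script is saved as brigade.js and that a project named my-project has already been created:

# Trigger the "exec" event for the project, using the local brigade.js file
brig run my-project -f brigade.js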

Brigade has excellent documentation; I encourage you to read more about it at https://docs.brigade.sh/

In this post I will show a slightly more complex example, taken from a recent customer project where we developed a microservice application running on Kubernetes and found the need for some extra automation.

Removing Kubernetes environment on PR completion

Kubernetes makes it easy to create new isolated environments for your application when you need to. A common desire of many teams is to deploy the application into a fresh environment every time a pull request is created. This lets the team and stakeholders test and verify the feature that is being developed, before it gets merged into the master branch.

Using Azure DevOps, it's quite easy to set up a release pipeline where every PR is deployed into a new namespace in Kubernetes. You can enable stages in a pipeline to be triggered by pull requests, and then use information from that PR to create a new namespace in your Kubernetes cluster and deploy the app into that namespace.

The problem we recently ran into with this at a customer was: how can we make sure that this namespace (and everything in it) is removed once the PR is completed and merged? We can't keep it around, since that would eventually consume all the resources in the cluster, and we don't want to rely on cleaning it up manually.

This turned out to be a perfect case for Brigade. We can configure a service hook in Azure DevOps, so that every time a PR is updated we trigger a Brigade pipeline. In the pipeline we check if the PR was completed, and if so, extract the relevant information from the PR and then clean up the corresponding namespace. To do this, we used existing container images that let us run helm and kubectl commands.

The Brigade script looks like this:

const { events, Job } = require("brigadier");
const util = require('util')

const HELM_VERSION = "v2.13.0"
const HELM_CONTAINER = "lachlanevenson/k8s-helm:" + HELM_VERSION;

const KUBECTL_VERSION = "v1.12.8";
const KUBECTL_CONTAINER = "lachlanevenson/k8s-kubectl:" + KUBECTL_VERSION;

events.on("simpleevent", (event, project) => {
    const payload = JSON.parse(event.payload);
    const prId = payload.resource.pullRequestId;

    if (!payload.resource.sourceRefName.includes('/feature/') && !payload.resource.sourceRefName.includes('/bug/')) {
        console.log(`The source branch ${payload.resource.sourceRefName} is not a /feature/ or /bug/ and is therefore skipped.`)
        return;
    }

    if (payload.resource.status !== "completed" && payload.resource.status !== "abandoned") {
        console.log(`PullRequest not complete or abandoned (current status: ${payload.resource.status}).`);
        return;
    }

    var helm_job = new Job("helm-delete-release", HELM_CONTAINER);
    helm_job.env = {
        'HELM_HOST': "10.0.119.135:44134"
    };
    helm_job.tasks = ["helm init --client-only", `helm delete --purge samplewebapp-${prId}`];

    var kubectl_job = new Job("kubectl-delete-ns", KUBECTL_CONTAINER);
    kubectl_job.tasks = [`kubectl delete namespace samplewebapp-${prId}`];

    console.log("==> Running helm_job Job")
    helm_job.run().then(helmResult => {
        console.log(helmResult.toString())

        kubectl_job.run().then(kubectlResult => {
            console.log(kubectlResult.toString());
        });
    })
});

events.on("error", (e) => {
    console.log("Error event " + util.inspect(e, false, null))
    console.log("==> Event " + e.type + " caused by " + e.provider + " cause class" + e.cause + e.cause.reason)
})

events.on("after", (e) => {
    console.log("After event fired " + util.inspect(e, false, null))
});
 

This script runs when the "simpleevent" event is triggered. The event is handled by the generic gateway in Brigade and can be used to send any kind of information (as a JSON document) to your pipeline. To trigger the event, we configure a service hook in Azure DevOps for the Pull Request updated event and point it to the generic gateway:

image

The full URL looks like this:

https://brigadedemo.ehn.nu/simpleevents/v1/brigade-55cbf57f7aaeb59afa1fe4d33ca6a5a635eefe060b057c423c97a0/somesecret

The URL contains the project id and the secret that were specified when creating the project. This is how external requests are authenticated and routed to the correct Brigade script.
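If you want to test the pipeline without going through Azure DevOps, you can POST a JSON document to that endpoint yourself. A minimal sketch, using only the fields that the script actually reads (the real service hook payload from Azure DevOps contains a lot more):

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"resource": {"pullRequestId": 99, "status": "completed", "sourceRefName": "refs/heads/feature/my-feature"}}' \
  https://brigadedemo.ehn.nu/simpleevents/v1/brigade-55cbf57f7aaeb59afa1fe4d33ca6a5a635eefe060b057c423c97a0/somesecret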

Inside the event handler we use two different container images. The first one is for running a Helm command to delete the Kubernetes deployment. Since Helm can't delete the namespace, we run a second job inside another container image that contains the kubectl tool, where we can delete the namespace by running:

kubectl delete namespace samplewebapp-${prId}

The prId variable is parsed from the PullRequest updated event coming from Azure DevOps. We use the id of the pull request to create a unique namespace (in this case pull request with id 99 will be deployed into the samplewebapp-99 namespace).

NB: You will need to make sure that the service account for Brigade has enough permissions to delete the namespace. Namespaces are a cluster-level resource, so deleting one requires higher permissions than deleting a deployment inside a namespace.

One easy way to do this is to assign the cluster-admin role to the Brigade service account; this is not recommended for production though.
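Here is a minimal sketch of such a binding; the service account name and namespace are assumptions, so check which service account your Brigade jobs actually run under before applying it:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: brigade-worker-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: brigade-worker    # assumption: the service account used by the Brigade worker/jobs
    namespace: default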

Now, when a PR is complete, our pipeline is triggered and it will delete the deployment and then the namespace.

To view the running jobs and their output, you can either use the Brigade dashboard (called Kashti) by running brig dashboard, or you can install the Brigade terminal, which gives you a similar UI inside your favourite console.

Here is the output from the PR job in the Brigade terminal:

image

It shows that two jobs were executed in this build, and you can see the images that were used and the id of each job. To see the output of a job, just press Return on it:

image

Here you can see the output of the helm job that deletes my Helm deployment for the corresponding pull request.

Summary

I encourage you to take a look at Brigade. It's easy to get started with, and you can implement all sorts of automation without having to resort to other platforms and services. And although JavaScript might put some people off, the power of a real programming language (compared to some DSL) pays off when you want to implement something non-trivial.

If you are already using Kubernetes, why not use it for more things than your apps!

Thanks to my colleague Tobias Lolax (https://twitter.com/Tobibben) who did the original implementation of this for our customer.

Running Windows Container Build Agents for Azure Pipelines

In a previous post I talked about how to create a build environment, including an Azure DevOps build agent, using Docker and Windows Containers. Using Dockerfiles, we can specify everything that we need in order to build and test our projects. Docker gives us Infrastructure as Code (no more snowflake build servers) and isolation which makes it easy to spin up multiple agents quickly on one or more machines without interfering with each other.

What I didn't talk about in that post is how to actually deploy and run the Windows containers in a production environment. I showed how to start the agent using docker run, but for running build agents for production workloads, you need something more stable and maintainable. There are also some additional aspects that you will need to handle when running build agents in containers.

For hosting and orchestrating Windows containers there are a few different options:

  • Docker Compose
  • Docker Swarm
  • Kubernetes (which recently announced General Availability for running Windows Containers)

In this post I will show how to use Docker Compose to run the build agents. In an upcoming post, I will use Azure Kubernetes Service to run Windows container build agents on multiple machines in the cloud (support for Windows containers is currently in preview: https://docs.microsoft.com/en-us/azure/aks/windows-container-cli).

In addition to selecting the container hosting, there are some details that we want to get right:

  • Externalize build agent working directory
    We want to make sure that the working directory of the build agents is mapped to outside of the container. Otherwise we will lose all state when an agent is restarted, making all subsequent builds slower

  • Enable “Docker in docker”
    Of course we want our build agent to be able to build Dockerfiles. While it is technically possible to install and run Docker engine inside a Docker container, it is not recommended. Instead, we install the Docker CLI in the container and use Named Pipes to bind the Docker API from the host. That means that all containers running on the host will share the same Docker engine. An advantage of this is that they will all benefit from the Docker image and build cache, improving build times overall, and reducing the amount of disk space needed

  • Identity
    When accessing resources outside the container, the build agent will almost always need to authenticate against that resource. This could be for example a custom NuGet feed, or a network share. A Windows container can’t be domain joined, but we can use group Managed Service Accounts (gMSA) which is a special type of service account introduced in Windows Server 2012 designed to allow multiple computers to share an identity without needing to know its password.

    You can follow this post from Microsoft on how to create and use group Managed Service Accounts for Windows containers:
    https://docs.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/manage-serviceaccounts

    This post assumes that you have created a gMSA called msa_BuildAgent (a docker run sketch showing how the credential spec is passed to a container follows right after this list).
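To make the named pipe binding and the credential spec concrete, here is a minimal docker run sketch; the image name mybuildagent:1.0 is a placeholder, and it assumes the msa_BuildAgent credential spec file has been generated on the host:

docker run -d --security-opt "credentialspec=file://msa_BuildAgent.json" -v \\.\pipe\docker_engine:\\.\pipe\docker_engine mybuildagent:1.0

The named pipe shows up again in the Docker Compose file below, expressed as an npipe volume.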

Docker Compose

Docker Compose makes it easy to start and stop multiple containers on a single host. All information is defined in a docker-compose.yml file; we can then start everything using a simple docker-compose up command, and stop all containers, tearing down networks and so on, with docker-compose down.

We need to send in multiple parameters when starting the build agent containers, and to avoid making the docker-compose file too complex, we can extract all parameters to an external file. This also makes it easy to tokenize it when we run this from an automated process.


docker-compose.yml

version: '2.4'
services:
  agent1:
    image: ${IMAGE}:${VERSION}
    volumes:
      - type: npipe
        source: \\.\pipe\docker_engine
        target: \\.\pipe\docker_engine
      - type: bind
        source: d:\w\${WORKFOLDERNAME}1
        target: c:\agent\_work
    env_file: .env
    environment:
      TFS_AGENT_NAME: ${AGENTNAME}-1
    restart: always
  agent2:
    image: ${IMAGE}:${VERSION}
    volumes:
      - type: npipe
        source: \\.\pipe\docker_engine
        target: \\.\pipe\docker_engine
      - type: bind
        source: d:\w\${WORKFOLDERNAME}2
        target: c:\agent\_work
    env_file: .env
    environment:
      TFS_AGENT_NAME: ${AGENTNAME}-2
    restart: always

As you can see, this file defines two containers (agent1 and agent2); you can easily add more here if you want to.

Some comments on this file:

  • To enable “Docker in Docker”, we use the volume mapping of type npipe, which stands for named pipes. This binds to the Docker API running on the host
  • An additional volume is defined that maps c:\agent\_work to the defined path on the container host
  • We specify restart: always to make sure that these containers are restarted in case the build server is restarted

All values for the variables will be taken from an environment file (the env_file argument) that looks like this:

.env (env_file)

TFS_URL=<ORGANIZATIONURL>
TFS_PAT=<PERSONALACCESSTOKEN>
TFS_POOL_NAME=<AGENTPOOLNAME>
IMAGE=<BUILDAGENTIMAGENAME>
VERSION=<BUILDAGENTIMAGETAG>
AGENTNAME=<CONTAINERNAME>
WORKFOLDERNAME=<WORKFOLDERNAME>
CREDENTIALSPEC=file://msa_BuildAgent.json

This file is placed in the same folder as the docker-compose.yml file.

Most of these parameters were covered in the previous post; the new ones here are:

  • WORKFOLDERNAME
    This is the path on the container host where the working directory should be mapped to. Internally in the container, the work directory in the agent is set to c:\agent\_work

  • CREDENTIALSPEC
    This is the name of the credential specification file that you created if you followed the post that I linked to above, when creating the group Managed Service Account. That file is placed in the c:\ProgramData\Docker\CredentialSpecs folder on your host

To start these build agents you simply run the following command in the same directory where you placed the docker-compose.yml and .env files:

docker-compose up -d

When you run this command, you will see something like:

Creating network "build_default" with the default driver
Creating build_agent1_1 ...
Creating build_agent2_1 ...
Creating build_agent1_1 ... done
Creating build_agent2_1 ... done

To stop all the containers, including tearing down the network that was created, you run:

docker-compose down

Automating the process

The process of deploying and updating build agent containers on a server should of course be automated. So we need something that runs on our build servers that can pull the build agent container images from a container registry, and then start the agents on that machine.

One way to do this with Azure DevOps is to use Deployment Groups, which let you run deployments on multiple machines either sequentially or in parallel.

Here is an image that shows what this could look like:

image

Here I have two build servers running Windows Server 2019 Core. The only things that are installed on these servers are Docker, Docker Compose and a Deployment Group agent. The deployment group agent will be used to stop the build agent containers, pull a new version of the build agent image and then start them up again.

Here is the deployment process in Azure Pipelines:

image

The process works like this (a sketch of the corresponding script follows the list):

  1. The image version is updated by modifying the .env file that we defined before, using the build number of the current build

  2. We run docker login to authenticate to the container registry where we have the build agent container image. In this case we are using Azure Container Registry, but any registry will do

  3. The new version of the image is then pulled from the registry. This can take a while (Windows Containers are big) but usually only a few small layers need to be pulled after you have pulled the initial image the first time

  4. When we have the new image locally, we shut down the agents by running docker-compose down

  5. And finally, we start the agents up again by running docker-compose up -d
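Here is a sketch of what those steps could look like as a script on each server; the registry name, credentials and the way the new image tag is passed in are placeholders, not the exact pipeline tasks used above:

# 1. Update the image version in the .env file (placeholder: however the new tag reaches the job)
(Get-Content .env) -replace '^VERSION=.*', "VERSION=$env:BUILD_BUILDNUMBER" | Set-Content .env

# 2. Authenticate to the container registry (placeholder registry and credentials)
docker login myregistry.azurecr.io -u $env:REGISTRY_USER -p $env:REGISTRY_PASSWORD

# 3. Pull the new version of the build agent image
docker-compose pull

# 4. Stop the running agents
docker-compose down

# 5. Start the agents again with the new image
docker-compose up -d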

Deployment groups are powerful in that they let you specify how to roll out new deployments across multiple servers.

If you do not want to restart all of your build agents at the same time, you can specify this in the settings of the deployment group job:

image

Note: One thing that is not handled by this process is graceful shutdown, e.g. if a build is currently running it will be stopped when shutting down the agents. It would be fully possible to use the Azure Pipelines API to first disable all agents (to prevent new builds from starting) and then wait until any currently running builds have finished, before shutting them down. I just haven't done that yet.

Hopefully this post was helpful if you want to run Windows container build agents for Azure Pipelines on your servers!

Accessing Azure Artifacts feed in a Docker build

I've recently given talks at conferences and user groups on the topic of using Docker as a build engine, describing the builds using a Dockerfile. This has several advantages, such as fully consistent builds no matter where you run them, and no dependencies necessary except Docker.

Some things become a bit trickier though. I've blogged previously about how to run unit tests in a Docker build, including getting the test results out of the build container afterwards.

Another thing that you will soon hit if you start with Dockerfile builds is how to restore packages from an authenticated NuGet feed, such as Azure Artifacts. The reason this is problematic is that the build runs inside a Docker container, as a user that can't authenticate to anything by default. If you build a project that references a package located in an Azure Artifacts feed, you'll get an error like this:

Step 4/15 : RUN dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"
 ---> Running in 7071b05e2065
/usr/share/dotnet/sdk/2.2.202/NuGet.targets(119,5): error : Unable to load the service index for source https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json. [/src/WebApplication1/WebApplication1.csproj]
/usr/share/dotnet/sdk/2.2.202/NuGet.targets(119,5): error :   Response status code does not indicate success: 401 (Unauthorized). [/src/WebApplication1/WebApplication1.csproj]
The command '/bin/sh -c dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"' returned a non-zero code: 1

The output log above shows a 401 (Unauthorized) when we run a dotnet restore command.

Using the Azure Artifacts Credential Provider in a Dockerfile

To solve this, Microsoft supplies a credential provider for Azure Artifacts, which you can find here: https://github.com/microsoft/artifacts-credprovider

NuGet will look for installed credential providers and, depending on context, either prompt the user for credentials and store them in the credential manager of the current OS, or, for CI scenarios, use the information we pass in so that the credential provider can perform the authentication automatically.

To use the credential provider in a Dockerfile build, you need to download and configure it, and also be sure to specify the feed when you restore your projects. Here is a snippet from a Dockerfile that does just this:

NB: The full source code is available here https://dev.azure.com/jakob/dockerbuilds/_git/DockerBuilds?path=%2F4.%20NugetRestore&version=GBmaster

# Install Credential Provider and set env variables to enable Nuget restore with auth

ARG PAT
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json\", \"password\":\"${PAT}\"}]}"

# Restore packages using authenticated feed
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
RUN dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"

VSS_NUGET_EXTERNAL_FEED_ENDPOINTS is an environment variable that should contain the endpoint credentials for any feed that you need to authenticate against, in JSON format. The personal access token is sent to the Docker build using a build argument called PAT.
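Unescaped and pretty-printed, the JSON in that variable looks like this (using the same feed URL as in the snippet above):

{
  "endpointCredentials": [
    {
      "endpoint": "https://pkgs.dev.azure.com/jakob/_packaging/DockerBuilds/nuget/v3/index.json",
      "password": "<personal access token>"
    }
  ]
}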

To build this, create a Personal Access Token in your Azure DevOps account, with permissions to read your feeds, then run the following command:

docker build -f WebApplication1\Dockerfile -t meetup/demo4 . --build-arg PAT=<token>

You should now see the restore complete successfully.

Creating a Windows Container Build Agent for Azure Pipelines

Having automated builds that are stable and predictable is so important in order to succeed with CI/CD. One important practice to enable this is to have a fully scriptable build environment that lets you deploy multiple, identical, build environment hosts. This can be done by using image tooling such as Packer from HashiCorp. Another option is to use Docker, which is what I am using in this post.

Using Docker, we will create a Dockerfile that specifies the content of the image in which builds will run. This image should contain the SDKs and tooling necessary to build and test your projects. It will also contain the build agent for your favourite CI server, which lets you spin up a new agent in seconds using the Docker image.

 

In this post I will walk you through how to create a Windows container image for Azure Pipelines/Azure DevOps Server that contains the necessary build tools for building .NET Framework and .NET Core projects.

I am using Windows containers here because I want to be able to build full .NET Framework projects (in addition to .NET Core of course). If you only use .NET Core things are much simpler; there is even an existing Docker image from Microsoft that contains the build agent here: https://hub.docker.com/r/microsoft/vsts-agent/

 

All files referred to in this blog post are available over at GitHub:
https://github.com/jakobehn/WindowsContainerBuildImage

 

Prerequisites:

You need to have Docker Desktop installed on your machine to build the image.

I also recommend using Visual Studio Code with the Docker extension installed for authoring Dockerfiles (see https://code.visualstudio.com/docs/azure/docker)

Specifying the base image

All Docker images must inherit from a base image. In this case we will start with one of the images from Microsoft that ships with the full .NET Framework SDK, microsoft/dotnet-framework.

If you have the Docker extension in VS Code installed, you can browse existing images and tags directly from the editor:

image

I'm going to use the image with the .NET Framework 4.7.2 SDK installed, running on Windows Server Core:

image
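In the Dockerfile, that translates to a FROM line along these lines (the exact tag to use depends on which .NET Framework and Windows Server Core version you target):

FROM microsoft/dotnet-framework:4.7.2-sdk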

Installing Visual Studio Build Tools

In order to build .NET Framework apps we need to have the proper build tools installed. Installing Visual Studio in a Docker container is possible but not recommended. Instead we can install Visual Studio Build Tools, and select which components to install.

To understand which components are available and which identifiers they have, this page is very useful. It lists all available components that you can install in Visual Studio Build Tools 2017:
https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2017

In the lines shown below, I'm first downloading and installing the Visual Studio log collection tool (vscollect), which lets us capture the installation log. Then we download the build tools from the Visual Studio 2017 release channel feed.

Finally, we install the build tools in quiet mode, specifying the desired components. Of course you might want to change this list to fit your needs.

image
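The original shows these lines as a screenshot; here is a rough sketch of what they could look like, following Microsoft's documented pattern for installing Build Tools in a container. The component list and install path are assumptions you will want to adjust, and the vscollect step is omitted:

# escape=`
# (this parser directive goes at the very top of the Dockerfile and makes the backtick the line-continuation character)

# Download the Visual Studio Build Tools bootstrapper from the VS2017 release channel
RUN powershell -Command "Invoke-WebRequest -Uri https://aka.ms/vs/15/release/vs_buildtools.exe -OutFile C:\vs_buildtools.exe"

# Install the build tools in quiet mode with the desired workloads; exit code 3010 just means a reboot was requested
RUN C:\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.MSBuildTools `
    --add Microsoft.VisualStudio.Workload.WebBuildTools `
    || IF "%ERRORLEVEL%"=="3010" EXIT 0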

Installing additional tooling

You will most likely want to install additional tooling besides the standard VS build tools. In my case, I want to install Node.js, the latest version of the .NET Core SDK and also Web Deploy. Many of these things can be installed easily using Chocolatey, as shown below:

image
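A sketch of the Chocolatey part; the package list is an assumption, and Node.js and Web Deploy are both available as Chocolatey packages:

# Install Chocolatey
RUN powershell -Command "Set-ExecutionPolicy Bypass -Scope Process -Force; iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"

# Install Node.js and Web Deploy
RUN choco install -y nodejs-lts webdeploy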

Installing the .NET Core SDK can be done by simply downloading it, extracting it and updating the PATH environment variable:

image
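Roughly like this, assuming the SDK archive URL is passed in as a build argument (the actual URL was shown in the original screenshot):

# URL to the .NET Core SDK zip archive (assumption: passed in at build time)
ARG DOTNET_SDK_URL

# Download and extract the SDK, then put it on the PATH
RUN powershell -Command "Invoke-WebRequest -Uri $env:DOTNET_SDK_URL -OutFile dotnet.zip; Expand-Archive dotnet.zip -DestinationPath 'C:\dotnet'; Remove-Item dotnet.zip"
RUN setx /M PATH "%PATH%;C:\dotnet"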

Installing and configuring the Azure Pipelines Build Agent

Finally, we want to install the Azure Pipelines build agent and configure it. Installing the agent is done when we build the Docker image. Configuring it against your Azure DevOps organization must be done when starting the image, which means we do this in the CMD part of the Dockerfile and supply the necessary parameters.

image
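A sketch of those Dockerfile lines; the agent version and download URL follow the public Azure Pipelines agent releases, but treat the exact version as an assumption:

# Download the Azure Pipelines agent at image build time
ARG AGENT_VERSION=2.150.3
ADD https://vstsagentpackage.azureedge.net/agent/${AGENT_VERSION}/vsts-agent-win-x64-${AGENT_VERSION}.zip C:/setup/agent.zip

# Extract the agent (InstallAgent.ps1, shown below)
COPY InstallAgent.ps1 C:/setup/InstallAgent.ps1
RUN powershell -File C:/setup/InstallAgent.ps1

# Configure and start the agent when the container starts (ConfigureAgent.ps1, shown below)
COPY ConfigureAgent.ps1 C:/setup/ConfigureAgent.ps1
CMD ["powershell", "-File", "C:/setup/ConfigureAgent.ps1"]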

The InstallAgent.ps1 script simply extracts the downloaded agent:

image
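Something along these lines, extracting to C:\agent so that the work folder ends up under c:\agent\_work:

# InstallAgent.ps1 - extract the downloaded agent package and clean up
Expand-Archive -Path C:\setup\agent.zip -DestinationPath C:\agent
Remove-Item C:\setup\agent.zip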

ConfigureAgent.ps1 will be executed when the container is started, and here we are using the unattended install option for the Azure Pipelines agent to configure it against an Azure DevOps organization:

image
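A sketch of the unattended configuration; the flags are the documented config.cmd options, and the environment variables match the ones passed to docker run further down:

# ConfigureAgent.ps1 - configure the agent against Azure DevOps and start it
Set-Location C:\agent
.\config.cmd --unattended `
    --url $env:TFS_URL `
    --auth PAT --token $env:TFS_PAT `
    --pool $env:TFS_POOL_NAME `
    --agent $env:TFS_AGENT_NAME `
    --work c:\agent\_work `
    --replace

# Run the agent (keeps the container alive)
.\run.cmd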

Building the Docker image

To build the image from the Dockerfile, run the following command:

docker build -t mybuildagent:1.0 -m 8GB .

I'm allocating 8GB of memory here to make sure the installation process won't be too slow. In particular, installing the build tools is pretty slow (around 20 minutes on my machine) and I've found that allocating more memory speeds it up a bit. As always, Docker caches all image layers, so if you make a change to the Dockerfile, the build will go much faster the next time (unless you change the command that installs the build tools).

When the build is done you can run docker images to see your image.

Running the build agent

To start the image and connect it to your Azure DevOps organization, run the following command:

docker run -d -m 4GB --name <NAME> --storage-opt "size=100GB" -e TFS_URL=<ORGANIZATIONURL> -e TFS_PAT=<PAT> -e TFS_POOL_NAME=<POOL> -e TFS_AGENT_NAME=<NAME> mybuildagent:1.0

Replace the parameters in the above string:

  • NAME
    Name of the build agent as it is registered in the build pool in Azure DevOps. The Docker container will also use the same name, which can be handy when you are running multiple agents on the same host
  • ORGANIZATIONURL
    URL to your Azure DevOps account, e.g. https://dev.azure.com/contoso
  • PAT
    A personal access token that you need to create in your Azure DevOps organization. Make sure that the token has the AgentPools (read, manage) scope enabled
  • POOL
    The name of the agent pool in Azure DevOps that the agent should register in

When you run the agent from the command line you will see the id of the started Docker container. For troubleshooting, you can run docker logs <id> to see the output from the build agent running in the container.

image

After around 30 seconds or so, you should see the agent appear in the list of available agents in your agent pool:

image

Happy building!

Running .NET Core Unit Tests with Docker and Azure Pipelines

Using Docker for compiling your code is great, since it guarantees consistent behaviour regardless of where you are building your code, no matter if it's on the local dev machine or on a build server somewhere. It also reduces the need to install dependencies just to make the code compile. The only thing that you need to install is Docker!

When you create an ASP.NET Core project in Visual Studio and add Docker support for it, you will get a Dockerfile that looks something like this:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443 

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
COPY . . 
WORKDIR "/src/WebApplication1" 
RUN dotnet build "WebApplication1.csproj" -c Release -o /app 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

This is an example of a multi-stage Docker build. The first stage is based on the .NET Core SDK Docker image, in which the code is restored, built and published. The second stage uses the smaller .NET Core runtime Docker image, into which the generated artifacts from the first stage are copied.

The result is a smaller Docker image that will be pushed to a Docker registry and later on deployed to test and production environments. Smaller images mean faster download and startup times. And since the image doesn't contain as many SDKs etc., the surface area for security holes is much smaller.

Now, this will compile just fine locally, and setting up a build definition in Azure Pipelines is easy-peasy. Using the default Docker container build pipeline template results in a build like this:

image

But, we want to run unit tests also, and then publish the test results back to Azure DevOps. How can we do this?

Run Unit Tests in Docker

First of all we need to build and run the tests inside the container, so we need to extend the Dockerfile. In this sample, I have added an xUnit test project called WebApplication1.UnitTests.

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" 
COPY . . 
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app 
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app 

RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx" 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Now we are also restoring and compiling the test project, and then we run dotnet test to run the unit tests. To be able to publish the unit test results to Azure DevOps, we are using the --logger parameter which instructs dotnet to output a TRX file.

Now comes the tricky part. When we run these tests as part of a build, the results end up inside the container. To publish the test results we need to access the results from outside the container. Docker volumes will not help us here, since we aren’t running the container, we are building it. Docker volumes are not supported when building a container.

Instead we will add another task to our build definition that uses scripts to build the image, including running the unit tests, and then copies the test results file from the container to a folder on the build server. We use the docker cp command to do this:

docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) . 
docker create -ti --name testcontainer webapplication1:$(build.buildid) 
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults 
docker rm -fv testcontainer

Here we first build the image by using docker build. By using the --target parameter it will only execute the first stage of the build (there is no point in continuing if the tests fail). To access the file inside the container, we use docker create, which is a way to create and configure a container before actually starting it. In this case we don't need to start it; we just use docker cp to extract the test result files to the host.

Now we will have the TRX test results file in the artifact folder on the build server, which means we can just add a Publish Test Results task to our build definition:
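If you are using a YAML pipeline, the equivalent task could look something like this (a sketch; the searchFolder matches the path used in the docker cp command above):

- task: PublishTestResults@2
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'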

image

And voila, running the build now runs the unit tests and we can see the test results in the build summary as expected:

image