Creating a Windows Container Build Agent for Azure Pipelines

Having automated builds that are stable and predictable is essential for succeeding with CI/CD. One important practice that enables this is a fully scriptable build environment that lets you deploy multiple, identical build hosts. This can be done with image tooling such as Packer from HashiCorp. Another option is to use Docker, which is what I am using in this post.

Using Docker, we will create a Dockerfile that specifies the content of the image in which builds will run. This image should contain the SDKs and tooling necessary to build and test your projects. It will also contain the build agent for your favourite CI server, which lets you spin up a new agent in seconds from the Docker image.

 

In this post I will walk you through how to create a Windows container image for Azure Pipelines/Azure DevOps Server that contains the necessary build tools for building .NET Framework and .NET Core projects.

I am using Windows containers here because I want to be able to build full .NET Framework projects (in addition to .NET Core, of course). If you only use .NET Core, things are much simpler; there is even an existing Docker image from Microsoft that contains the build agent here: https://hub.docker.com/r/microsoft/vsts-agent/

 

All files referred to in this blog post are available over at GitHub:
https://github.com/jakobehn/WindowsContainerBuildImage

 

Prerequisites:

You need to have Docker Desktop installed on your machine to build the image.

I also recommend using Visual Studio Code with the Docker extension installed for authoring Dockerfiles (see https://code.visualstudio.com/docs/azure/docker)

Specifying the base image

All Docker images must inherit from a base image. In this case, we will start with one of the images from Microsoft that ships with the full .NET Framework SDK, microsoft/dotnet-framework.

If you have the Docker extension in VS Code installed, you can browse existing images and tags directly from the editor:

image

I’m going to use the image with the .NET Framework 4.7.2 SDK installed, running on Windows Server Core:

image
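In the Dockerfile, this corresponds to the FROM instruction. A minimal sketch of the top of such a Dockerfile is shown below; the exact tag is an assumption, so check Docker Hub for the tag that matches the Windows Server Core version you are targeting. The escape directive is there so that Windows paths (backslashes) and backtick line continuations can be used later in the file.

# escape=`

# Full .NET Framework 4.7.2 SDK on Windows Server Core
# (example tag; pick the one that matches your host OS version)
FROM microsoft/dotnet-framework:4.7.2-sdk-windowsservercore-ltsc2016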

Installing Visual Studio Build Tools

In order to build .NET Framework apps, we need the proper build tools installed. Installing Visual Studio in a Docker container is possible but not recommended. Instead, we can install Visual Studio Build Tools and select which components to install.

To understand which components are available and which identifiers they have, this page is very useful. It lists all components that you can install with Visual Studio Build Tools 2017:
https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2017

In the lines shown below, I first download the Visual Studio log collection tool (vscollect), which lets us capture the installation log. Then we download the Build Tools from the Visual Studio 2017 release channel feed.

Finally, we install the Build Tools in quiet mode, specifying the desired components. Of course, you might want to change this list to fit your needs.

image
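The post shows this step as a screenshot; below is a rough sketch of what it can look like, based on Microsoft’s documented pattern for installing Build Tools into a container (and relying on the escape directive from the sketch above). The workload list is just an example and should be adjusted to your needs.

# Download the log collection tool (for troubleshooting failed installs),
# the VS 2017 release channel feed and the Build Tools bootstrapper
ADD https://aka.ms/vscollect.exe C:\TEMP\collect.exe
ADD https://aka.ms/vs/15/release/channel C:\TEMP\VisualStudio.chman
ADD https://aka.ms/vs/15/release/vs_buildtools.exe C:\TEMP\vs_buildtools.exe

# Install the Build Tools quietly with an example set of workloads.
# Exit code 3010 means "success, restart required" and is treated as success here.
RUN C:\TEMP\vs_buildtools.exe --quiet --wait --norestart --nocache `
    --channelUri C:\TEMP\VisualStudio.chman --installChannelUri C:\TEMP\VisualStudio.chman `
    --installPath C:\BuildTools `
    --add Microsoft.VisualStudio.Workload.MSBuildTools `
    --add Microsoft.VisualStudio.Workload.WebBuildTools `
    --add Microsoft.VisualStudio.Workload.NetCoreBuildTools `
    || IF "%ERRORLEVEL%"=="3010" EXIT 0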

Installing additional tooling

You will most likely want to install additional tooling besides the standard VS Build Tools. In my case, I want to install Node.js, the latest version of the .NET Core SDK and Web Deploy. Many of these things can be installed easily using Chocolatey, as shown below:

image
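Again, the original shows this step as a screenshot; here is a sketch of what the Chocolatey-based installs could look like (the package IDs are examples):

# Install Chocolatey using its official install script, then use it to install tooling
# (on some base images you may need to enable TLS 1.2 in the session first)
RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://chocolatey.org/install.ps1'))"
RUN choco install nodejs.install -y
RUN choco install webdeploy -y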

Installing the .NET Core SDK can be done by simply downloading it, extracting it and updating the PATH environment variable:

image
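The screenshot in the post shows a direct download and extraction; an equivalent sketch using the official dotnet-install.ps1 script is shown here (the SDK version and install directory are examples):

# Download the dotnet-install script and install the .NET Core SDK to C:\dotnet
RUN powershell -NoProfile -Command "Invoke-WebRequest https://dot.net/v1/dotnet-install.ps1 -OutFile C:\TEMP\dotnet-install.ps1"
RUN powershell -NoProfile -ExecutionPolicy Bypass -Command "C:\TEMP\dotnet-install.ps1 -Version 2.1.403 -InstallDir C:\dotnet"

# Put the SDK on the machine PATH so that later steps and the build agent can find it
RUN setx /M PATH "%PATH%;C:\dotnet"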

Installing and configuring the Azure Pipelines Build Agent

Finally, we want to install the Azure Pipelines build agent and configure it. Installing the agent is done when we build the Docker image. Configuring it against your Azure DevOps organization must be done when the container starts, which means we do this in the CMD part of the Dockerfile and supply the necessary parameters.

image
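The Dockerfile part of this, shown as a screenshot in the post, could look roughly like the sketch below. The agent version/URL and the script paths are assumptions; grab the current agent download link from the Agent pools page in your Azure DevOps organization.

# Download the Azure Pipelines agent at image build time (example version/URL)
ADD https://vstsagentpackage.azureedge.net/agent/2.144.0/vsts-agent-win-x64-2.144.0.zip C:\TEMP\agent.zip
COPY InstallAgent.ps1 C:/BuildScripts/
COPY ConfigureAgent.ps1 C:/BuildScripts/
RUN powershell -NoProfile -ExecutionPolicy Bypass -File C:\BuildScripts\InstallAgent.ps1

# Configuration happens at container start, using environment variables passed to docker run
CMD powershell -NoProfile -ExecutionPolicy Bypass -File C:\BuildScripts\ConfigureAgent.ps1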

The InstallAgent.ps1 script simply extracts the downloaded agent:

image
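A minimal sketch of such a script, assuming the agent zip was downloaded to C:\TEMP\agent.zip as in the Dockerfile sketch above:

# InstallAgent.ps1 (sketch): extract the downloaded agent package to C:\agent
$agentZip = 'C:\TEMP\agent.zip'
$agentDir = 'C:\agent'

New-Item -ItemType Directory -Path $agentDir -Force | Out-Null
Expand-Archive -Path $agentZip -DestinationPath $agentDir
Remove-Item $agentZip -Force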

ConfigureAgent.ps1 will be executed when the container is started, and here we are using the unattended configuration option for the Azure Pipelines agent to configure it against an Azure DevOps organization:

image
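A sketch of what this script could look like, using the agent’s documented unattended configuration options and the environment variables supplied by the docker run command shown further down:

# ConfigureAgent.ps1 (sketch): configure and start the agent using environment
# variables passed to docker run (TFS_URL, TFS_PAT, TFS_POOL_NAME, TFS_AGENT_NAME)
Set-Location C:\agent

# Unattended configuration against the Azure DevOps organization
.\config.cmd --unattended `
    --url $env:TFS_URL `
    --auth PAT `
    --token $env:TFS_PAT `
    --pool $env:TFS_POOL_NAME `
    --agent $env:TFS_AGENT_NAME `
    --replace

# Run the agent in the foreground so the container keeps running
.\run.cmd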

Building the Docker image

To build the image from the Dockerfile, run the following command:

docker build -t mybuildagent:1.0 -m 8GB .

I’m allocating 8GB of memory here to make sure the installation process won’t be too slow. In particular, installing the Build Tools is pretty slow (around 20 minutes on my machine), and I’ve found that allocating more memory speeds it up a bit. As always, Docker caches all image layers, so if you make a change to the Dockerfile, the build will go much faster the next time (unless you change the command that installs the Build Tools, of course).

When the build is done you can run docker images to see your image.

Running the build agent

To start the image and connect it to your Azure DevOps organization, run the following command:

docker run -d -m 4GB --name <NAME> --storage-opt "size=100GB" -e TFS_URL=<ORGANIZATIONURL> -e TFS_PAT=<PAT> -e TFS_POOL_NAME=<POOL> -e TFS_AGENT_NAME=<NAME> mybuildagent:1.0

Replace the parameters in the above string:

  • NAME
    Name of the build agent as it is registered in the agent pool in Azure DevOps. The Docker container will also use the same name, which can be handy when you are running multiple agents on the same host.
  • ORGANIZATIONURL
    URL to your Azure DevOps account, e.g. https://dev.azure.com/contoso
  • PAT
    A personal access token that you need to create in your Azure DevOps organization. Make sure that the token has the Agent Pools (read, manage) scope enabled.
  • POOL
    The name of the agent pool in Azure DevOps that the agent should register in

When you start the agent from the command line, you will see the ID of the started Docker container. For troubleshooting, you can run docker logs <id> to see the output from the build agent running in the container.
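For example, using the container name that was given to docker run:

# Find the container, then tail the agent output
docker ps
docker logs -f <NAME>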

image

After around 30 seconds or so, you should see the agent appear in the list of available agents in your agent pool:

image

Happy building!

Deploy Azure Web App for Containers with ARM and Azure DevOps

Using Docker containers for building and running your applications has many advantages, such as consistent builds, build once run anywhere, and a standardized packaging and deployment format, just to name a few.

When it comes to running the containers you might look into container orchestrators such as Kubernetes or Docker Swarm. Sometimes though, these orchestrators can be overkill for your applications. If you are developing web applications that have only a few dependent runtime components, another option is to use Azure Web App for Containers, which is a mouthful for saying that you can use your beloved Azure Web Apps with all the functionality that comes with it (easy scaling, SSL support etc.), but deploy your code in a container. Best of both worlds, perhaps?

In this post I will show how you can create an ARM template that creates the Azure Web App with the necessary settings to connect it to an Azure Container Registry, and how you set up an Azure Pipeline to build and deploy the container.

The code for this blog post is available on GitHub:

https://github.com/jakobehn/containerwebapp

The release definition is available here:
https://dev.azure.com/jakob/blog

Prerequisites

  • An Azure subscription (duh)
  • An Azure Container Registry
  • An Azure DevOps project

Creating the ARM Template

First up is creating an ARM template that will deploy the web app resource to your Azure subscription. Creating an ARM template for a web app is easy: you can use the Azure Resource Group project in Visual Studio (this project template is installed with the Azure SDK) and select the Web app template:

image


Now, we need to make some changes in order to deploy this web app as a container. First of all, we will change some settings of the App Service Plan.

Set the “kind” property to “linux”, to specify that this is a Linux hosted web app (Windows containers for Web Apps are in preview at the moment).

Then we also need to set the “reserved” property to true (the documentation just says: ‘If Linux app service plan true, false otherwise’).

{
   "apiVersion": "2015-08-01",
   "name": "[parameters('hostingPlanName')]",
   "type": "Microsoft.Web/serverfarms",
   "location": "[resourceGroup().location]",
   "kind": "linux",
   "tags": {
     "displayName": "HostingPlan"
   },
   "sku": {
     "name": "[parameters('skuName')]",
     "capacity": "[parameters('skuCapacity')]"
   },
   "properties": {
     "name": "[parameters('hostingPlanName')]",
     "reserved": true
   }
},

For the web app definition, we need to set the “kind” property to “app,linux,container” to make this a containerized web app resource. We also need to set the DOCKER_CUSTOM_IMAGE_NAME to something. We will set the correct image later on from our deployment pipeline, but this property must be here when we create the web app resource.

{
   "apiVersion": "2015-08-01",
   "name": "[variables('webSiteName')]",
   "type": "Microsoft.Web/sites",
   "kind": "app,linux,container",
   "location": "[resourceGroup().location]",
   "tags": {
     "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
     "displayName": "Website"
   },
   "dependsOn": [
     "[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
   ],
   "properties": {
     "name": "[variables('webSiteName')]",
     "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
     "siteConfig": {
       "DOCKER_CUSTOM_IMAGE_NAME": "containerwebapp"
     }
   }
},

Again, the full source is available over at GitHub (see the link at the top).

Azure Pipeline

Let’s create a deployment pipeline that will build and push the image, and then deploy the ARM template and finally the web app container.

First up is the build definition. Here I’m using YAML, since it lets me store the build definition in source control together with the rest of the application:

NB: You need to change the azureSubscriptionEndpoint and azureContainerRegistry values to the name of your service endpoint and Azure container registry.


azure-pipelines.yml

name: 1.0$(rev:.r)

trigger:
  - master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
  - task: Docker@1
    displayName: 'Build image'
    inputs:
      azureSubscriptionEndpoint: 'Azure Sponsorship'
      azureContainerRegistry: jakob.azurecr.io
      dockerFile: ContainerWebApp/Dockerfile
      useDefaultContext: false
      imageName: 'containerwebapp:$(Build.BuildNumber)'

  - task: Docker@1
    displayName: 'Push image'
    inputs:
      azureSubscriptionEndpoint: 'Azure Sponsorship'
      azureContainerRegistry: jakob.azurecr.io
      command: 'Push an image'
      imageName: 'containerwebapp:$(Build.BuildNumber)'

  - task: PublishBuildArtifacts@1
    displayName: 'Publish ARM template'
    inputs:
      PathtoPublish: 'ContainerWebApp.ResourceGroup'
      ArtifactName: template

The build definition performs the following steps:

  1. Builds the container image using the Docker task, where we point to the Dockerfile and give it an image name
  2. Pushes the container image to Azure Container Registry
  3. Publishes the content of the Azure resource group project back to Azure Pipelines. This will be used when we deploy the resource group in the release definition

Running this build should push an image to your container registry.

Now we will create a release definition that deploys the resource group and then the container web app.

First up is the resource group deployment. Here we use the Azure Resource Group Deployment task, where we point to the ARM template JSON file and the parameters file. We also override the name of the app hosting plan, since that is an input parameter to the template.

image
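The release definition in the post is built in the classic editor (shown in the screenshot above). For reference, a roughly equivalent YAML sketch of the task is shown below; the service connection name, artifact alias and file paths are assumptions that you will need to adjust, and the input names should be verified against the task documentation.

- task: AzureResourceGroupDeployment@2
  displayName: 'Deploy ARM template'
  inputs:
    azureSubscription: 'Azure Sponsorship'
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'containerwebapp-rg'
    location: 'West Europe'
    csmFile: '$(System.DefaultWorkingDirectory)/_ContainerWebApp-CI/template/WebSite.json'
    csmParametersFile: '$(System.DefaultWorkingDirectory)/_ContainerWebApp-CI/template/WebSite.parameters.json'
    overrideParameters: '-hostingPlanName containerwebapp-plan'
    deploymentMode: 'Incremental'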

Then we use the Azure App Service Deploy task to deploy the container to the web app. Note that we are using the preview 4.* version, since that has support for deploying to Web App for Containers.

image

In the rest of the parameters for this task we specify the name of the container registry, the name of the image and the specific tag that we want to deploy. The tag is fetched from the build number of the associated build.

Finally we set the following app settings:

  • DOCKER_REGISTRY_SERVER_URL: The URL to the Docker registry
  • DOCKER_REGISTRY_SERVER_USERNAME: The login to the registry. For ACR, this is the name of the registry
  • DOCKER_REGISTRY_SERVER_PASSWORD: The password to the registry. For ACR, you can get this in the Access Keys blade in the Azure portal

image
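As above, the post configures this in the classic editor. A rough YAML sketch of what the 4.* task configuration could look like is shown below; the web app name, registry values and the $(registryPassword) secret variable are placeholders, and the input names are from memory, so check the task documentation.

- task: AzureRmWebAppDeployment@4
  displayName: 'Deploy container to Web App'
  inputs:
    azureSubscription: 'Azure Sponsorship'
    appType: 'webAppContainer'
    WebAppName: 'containerwebapp'
    DockerNamespace: 'jakob.azurecr.io'
    DockerRepository: 'containerwebapp'
    DockerImageTag: '$(Build.BuildNumber)'
    AppSettings: >-
      -DOCKER_REGISTRY_SERVER_URL https://jakob.azurecr.io
      -DOCKER_REGISTRY_SERVER_USERNAME jakob
      -DOCKER_REGISTRY_SERVER_PASSWORD $(registryPassword)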

That’s it. Running the release deploys the resource group (this takes 1-2 minutes the first time) and then the container to the web app. Once done, you can browse the site and verify that it works as expected:

image

Running .NET Core Unit Tests with Docker and Azure Pipelines

Using Docker for compiling your code is great, since it guarantees consistent behaviour regardless of where you are building your code, no matter if it’s on the local dev machine or on a build server somewhere. It also removes the need to install any dependencies just to make the code compile. The only thing you need to install is Docker!

When you create an ASP.NET Core project in Visual Studio and add Docker support to it, you will get a Dockerfile that looks something like this:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443 

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
COPY . . 
WORKDIR "/src/WebApplication1" 
RUN dotnet build "WebApplication1.csproj" -c Release -o /app 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

This is an example of a multi-stage Docker build. The first stage is based on the .NET Core SDK Docker image, in which the code is restored, built and published. The second stage uses the smaller .NET Core runtime Docker image, into which the generated artifacts from the first stage are copied.

The result is a smaller Docker image that will be pushed to a Docker registry and later on deployed to test and production environments. Smaller images mean faster download and startup times. Since the image doesn’t contain SDKs and other extras, it also means that the surface area for security holes is much smaller.

Now, this will compile just fine locally, and setting up a build definition in Azure Pipelines is easy-peasy. Using the default Docker container build pipeline template results in a build like this:

image

But we also want to run unit tests, and then publish the test results back to Azure DevOps. How can we do this?

Run Unit Tests in Docker

First of all, we need to build and run the tests inside the container, so we need to extend the Dockerfile. In this sample, I have added an xUnit test project called WebApplication1.UnitTests.

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" 
COPY . . 
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app 
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app 

RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx" 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Now we are also restoring and compiling the test project, and then we run dotnet test to run the unit tests. To be able to publish the unit test results to Azure DevOps, we use the --logger parameter, which instructs dotnet to output a TRX file.

Now comes the tricky part. When we run these tests as part of a build, the results end up inside the container. To publish the test results, we need to access them from outside the container. Docker volumes will not help us here: we aren’t running the container, we are building it, and volumes are not supported when building an image.

Instead, we will add a script task to our build definition that builds the image (including running the unit tests) and then copies the test results file from the container to a folder on the build server. We use the docker cp command to do this:

docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) . 
docker create -ti --name testcontainer webapplication1:$(build.buildid) 
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults 
docker rm -fv testcontainer

Here we first build the image using docker build. The --target parameter makes it execute only the first stage of the build (there is no point in continuing if the tests fail). To access the files inside the container, we use docker create, which is a way to create and configure a container before actually starting it. In this case we don’t need to start it; we just use docker cp to extract the test result files to the host.

Now we will have the TRX test results file in the artifact folder on the build server, which means we can just add a Publish Test Results task to our build definition:

image
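If you prefer YAML, a sketch of the corresponding task could look like this (the search folder matches the path used in the script above):

- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/*.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'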

And voila, running the build now runs the unit tests and we can see the test results in the build summary as expected:

image