Accessing Azure Artifacts feed in a Docker build

I’ve recently given talks at conferences and user groups on the topic of using Docker as a build engine, describing the builds using a Dockerfile. This has several advantages, such as a fully consistent build no matter where you run it, and no dependencies needed other than Docker.


Some things become a bit trickier though. I’ve blogged previously about how to run unit tests in a Docker build, including getting the test results out of the build container afterwards.

Another thing that you will soon hit if you start with Dockerfile builds is how to restore packages from an authenticated NuGet feed, such as Azure Artifacts. The reason this is problematic is that the build runs inside a Docker container, as a Docker user that can’t authenticate to anything by default. If you build a project that references a package located in an Azure Artifacts feed, you’ll get an error like this:

Step 4/15 : RUN dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/IgniteTour/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"
 ---> Running in 7071b05e2065
/usr/share/dotnet/sdk/2.2.202/NuGet.targets(119,5): error : Unable to load the service index for source https://pkgs.dev.azure.com/jakob/_packaging/IgniteTour/nuget/v3/index.json. [/src/WebApplication1/WebApplication1.csproj]
/usr/share/dotnet/sdk/2.2.202/NuGet.targets(119,5): error :   Response status code does not indicate success: 401 (Unauthorized). [/src/WebApplication1/WebApplication1.csproj]
The command '/bin/sh -c dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/IgniteTour/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"' returned a non-zero code: 1

The output log above shows a 401 (Unauthorized) when we run a dotnet restore command.

Using the Azure Artifacts Credential Provider in a Dockerfile


To solve this, Microsoft supplies a credential provider for Azure Artifacts, which you can find here: https://github.com/microsoft/artifacts-credprovider

NuGet will look for installed credential providers and, depending on context, either prompt the user for credentials and store them in the credential manager of the current OS, or, for CI scenarios, use the information we pass in so that the credential provider can perform the authentication automatically.

To use the credential provider in a Dockerfile build, you need to download and configure it, and also be sure to specify the feed when you restore your projects. Here is a snippet from a Dockerfile that does just this:

NB: The full source code is available here https://dev.azure.com/jakob/ignitetour/_git/DockerBuilds?path=%2F4.%20NugetRestore&version=GBmaster

# Install Credential Provider and set env variables to enable NuGet restore with auth
ARG PAT
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS "{\"endpointCredentials\": [{\"endpoint\":\"https://pkgs.dev.azure.com/jakob/_packaging/IgniteTour/nuget/v3/index.json\", \"password\":\"${PAT}\"}]}"

# Restore packages using authenticated feed
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"]
RUN dotnet restore -s "https://pkgs.dev.azure.com/jakob/_packaging/IgniteTour/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "WebApplication1/WebApplication1.csproj"

VSS_NUGET_EXTERNAL_FEED_ENDPOINTS is an environment variable that should contain the endpoint credentials for any feed that you need to authenticate against, in JSON format. The personal access token is sent to the Dockerfile build using an argument called PAT.

To build this, create a Personal Access Token in your Azure DevOps account, with permissions to read your feeds, then run the following command:

docker build -f WebApplication1\Dockerfile -t meetup/demo4 . --build-arg PAT=<token>

You should now see the restore complete successfully.
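If you build the image from an Azure Pipelines job, you can pass the token as a build argument in the same way. A minimal sketch, assuming the token is stored in a secret pipeline variable called AzureArtifactsPAT (the variable name is just an example):

steps:
- script: docker build -f WebApplication1/Dockerfile -t meetup/demo4 --build-arg PAT=$(AzureArtifactsPAT) .
  displayName: 'Build image with authenticated NuGet restore'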

Creating a Windows Container Build Agent for Azure Pipelines

Having automated builds that are stable and predictable is essential in order to succeed with CI/CD. One important practice to enable this is to have a fully scriptable build environment that lets you deploy multiple, identical build hosts. This can be done by using image tooling such as Packer from HashiCorp. Another option is to use Docker, which is what I am using in this post.

Using Docker, we will create a Dockerfile that specifies the content of the image in which builds will run. This image should contain the SDKs and tooling necessary to build and test your projects. It will also contain the build agent for your favourite CI server, which lets you spin up a new agent in seconds using the Docker image.

 

In this post I will walk you through how to create a Windows container image for Azure Pipelines/Azure DevOps Server that contains the necessary build tools for building .NET Framework and .NET Core projects.

I am using Windows containers here because I want to be able to build full .NET Framework projects (in addition to .NET Core, of course). If you only use .NET Core, things are much simpler; there is even an existing Docker image from Microsoft that contains the build agent: https://hub.docker.com/r/microsoft/vsts-agent/

 

All files referred to in this blog post are available over at GitHub:
https://github.com/jakobehn/WindowsContainerBuildImage

 

Prerequisites:

You need to have Docker Desktop installed on your machine to build the image.

I also recommend using Visual Studio Code with the Docker extension installed for authoring Dockerfiles (see https://code.visualstudio.com/docs/azure/docker)

Specifying the base image

All Docker images must inherit from a base image. In this case we will start with one of the images from Microsoft that ships with the full .NET Framework SDK, microsoft/dotnet-framework.

If you have the Docker extension installed in VS Code, you can browse existing images and tags directly from the editor.


I’m going to use the image with the .NET Framework 4.7.2 SDK installed, running on Windows Server Core.

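In the Dockerfile, that translates to a FROM line along these lines (the exact tag is an assumption; check the repository for the currently available tags):

FROM microsoft/dotnet-framework:4.7.2-sdk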

Installing Visual Studio Build Tools

In order to build .NET Framework apps, we need to have the proper build tools installed. Installing Visual Studio in a Docker container is possible but not recommended. Instead, we can install Visual Studio Build Tools and select which components to install.

To understand which components are available and which identifier they have, this page is very useful. It contains all available components that you can install in Visual Studio Build Tools 2017:
https://docs.microsoft.com/en-us/visualstudio/install/workload-component-id-vs-build-tools?view=vs-2017

First, we download and install the Visual Studio Log Collection tool (vscollect), which lets us capture the installation log. Then we download the build tools from the Visual Studio 2017 release channel feed.

Finally, we install the build tools in quiet mode, specifying the desired components. Of course, you might want to change this list to fit your needs.

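A minimal sketch of that sequence, using the public aka.ms download shortcuts and an example set of workloads (adjust the component list to your needs):

SHELL ["powershell", "-Command", "$ErrorActionPreference = 'Stop';"]

# Download vscollect (for collecting installation logs) and the VS 2017 Build Tools bootstrapper
RUN Invoke-WebRequest https://aka.ms/vscollect.exe -OutFile C:\collect.exe; \
    Invoke-WebRequest https://aka.ms/vs/15/release/vs_buildtools.exe -OutFile C:\vs_buildtools.exe

# Install the build tools quietly with the desired workloads/components
RUN Start-Process C:\vs_buildtools.exe -Wait -ArgumentList '--quiet --norestart --nocache --add Microsoft.VisualStudio.Workload.MSBuildTools --add Microsoft.VisualStudio.Workload.WebBuildTools'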

Installing additional tooling

You will most likely want to install additional tooling besides the standard VS build tools. In my case, I want to install Node.js, the latest version of the .NET Core SDK and also Web Deploy. Many of these things can be installed easily using Chocolatey, as shown below:

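A sketch of what those installs could look like, assuming the PowerShell SHELL from the previous snippet and the Chocolatey package names nodejs.install and webdeploy:

# Install Chocolatey, then use it to install Node.js and Web Deploy
RUN Invoke-WebRequest https://chocolatey.org/install.ps1 -UseBasicParsing | Invoke-Expression
RUN choco install nodejs.install -y
RUN choco install webdeploy -y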

Installing the .NET Core SDK can be done by simply downloading it, extracting it and updating the PATH environment variable:

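One way to do that is with the official dotnet-install.ps1 script; the install directory and channel below are just examples:

# Download and run the .NET Core SDK install script, then add the SDK to PATH
RUN Invoke-WebRequest https://dot.net/v1/dotnet-install.ps1 -OutFile dotnet-install.ps1; \
    .\dotnet-install.ps1 -InstallDir C:\dotnet -Channel 2.2
RUN setx /M PATH $('C:\dotnet;' + $env:PATH)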

Installing and configuring the Azure Pipelines Build Agent

Finally, we want to install the Azure Pipelines build agent and configure it. Installing the agent is done when we are building the Docker image. Configuring it against your Azure DevOps organization must be done when starting the image, which means we will do this in the CMD part of the Dockerfile and supply the necessary parameters.

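A sketch of the relevant Dockerfile lines; the agent version and download URL are assumptions, and the two PowerShell scripts are described below:

# Download the agent at image build time
ARG AGENT_VERSION=2.150.3
RUN Invoke-WebRequest "https://vstsagentpackage.azureedge.net/agent/$env:AGENT_VERSION/vsts-agent-win-x64-$env:AGENT_VERSION.zip" -OutFile C:\agent.zip

COPY InstallAgent.ps1 ConfigureAgent.ps1 C:/
RUN C:\InstallAgent.ps1

# Configuration happens at container start, using environment variables passed to docker run
CMD ["powershell", "C:/ConfigureAgent.ps1"]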

The InstallAgent.ps1 script simply extracts the downloaded agent:

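A minimal sketch of what that script could look like (paths match the Dockerfile sketch above):

# InstallAgent.ps1 - extract the downloaded agent package
Expand-Archive -Path C:\agent.zip -DestinationPath C:\agent
Remove-Item C:\agent.zip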

ConfigureAgent.ps1 will be executed when the container is started, and here we are using the unattended install option for the Azure Pipelines agent to configure it against an Azure DevOps organization:

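A minimal sketch, using the agent's unattended configuration options and the environment variables that are passed to docker run further down:

# ConfigureAgent.ps1 - configure the agent against the Azure DevOps organization and start it
Set-Location C:\agent
.\config.cmd --unattended `
    --url $env:TFS_URL `
    --auth PAT `
    --token $env:TFS_PAT `
    --pool $env:TFS_POOL_NAME `
    --agent $env:TFS_AGENT_NAME `
    --replace

# Run the agent interactively so that the container keeps running
.\run.cmd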

Building the Docker image

To build the image from the Dockerfile, run the following command:

docker build -t mybuildagent:1.0 -m 8GB .

I’m allocating 8GB of memory here to make sure the installation process won’t be too slow. In particular, installing the build tools is pretty slow (around 20 minutes on my machine) and I’ve found that allocating more memory speeds it up a bit. As always, Docker caches all image layers, so if you make a change to the Dockerfile, the build will go much faster the next time (unless you change the command that installs the build tools, of course).

When the build is done you can run docker images to see your image.

Running the build agent

To start the image and connect it to your Azure DevOps organization, run the following command:

docker run -d -m 4GB --name <NAME> --storage-opt "size=100GB" -e TFS_URL=<ORGANIZATIONURL> -e TFS_PAT=<PAT> -e TFS_POOL_NAME=<POOL> -e TFS_AGENT_NAME=<NAME> mybuildagent:1.0

Replace the parameters in the above string:

  • NAME
    Name of the build agent as it is registered in the build pool in Azure DevOps. The Docker container will use the same name, which can be handy when you are running multiple agents on the same host
  • ORGANIZATIONURL
    URL to your Azure DevOps account, e.g. https://dev.azure.com/contoso
  • PAT
    A personal access token that you need to create in your Azure DevOps organization. Make sure that the token has the AgentPools (read, manage) scope enabled
  • POOL
    The name of the agent pool in Azure DevOps that the agent should register in

When you run the agent from the command line you will see the id of the started Docker container. For troubleshooting, you can run docker logs <id> to see the output from the build agent running in the container.

After around 30 seconds or so, you should see the agent appear in the list of available agents in your agent pool.

Happy building!

Deploy Azure Web App for Containers with ARM and Azure DevOps

Using Docker containers for building and running your applications has many advantages, such as consistent builds, build once and run anywhere, and an easy, standardized packaging and deployment format, just to name a few.

When it comes to running the containers, you might look into container orchestrators such as Kubernetes or Docker Swarm. Sometimes, though, these orchestrators can be overkill for your applications. If you are developing web applications that have only a few dependent runtime components, another option is to use Azure Web App for Containers, which is a mouthful for saying that you can use your beloved Azure Web Apps with all the functionality that comes with them (easy scaling, SSL support etc.), but deploy your code in a container. Best of both worlds, perhaps?

In this post I will show how you can create an ARM template that creates the Azure Web App with the necessary settings to connect it to an Azure Container Registry, and how to set up an Azure Pipeline that builds and deploys the container.

The code for this blog post is available on GitHub:

https://github.com/jakobehn/containerwebapp

The release definition is available here:
https://dev.azure.com/jakob/blog

Prerequisites

  • An Azure subscription (duh)
  • An Azure Container Registry
  • An Azure DevOps project

Creating the ARM Template

First up is creating an ARM template that will deploy the web app resource to your Azure subscription. Creating an ARM template for a web app is easy: you can use the Azure Resource Group project in Visual Studio (this template is installed with the Azure SDK) and select the Web app template.



Now, we need to make some changes in order to deploy this web app as a container. First of all, we will change some settings of the App Service Plan.

Set the “kind” property to “linux”, to specify that this is a Linux hosted web app (Windows containers for Web Apps are in preview at the moment).

Then we also need to set the “reserved” property to true (the documentation just says: “If Linux app service plan true, false otherwise”).

{
  "apiVersion": "2015-08-01",
  "name": "[parameters('hostingPlanName')]",
  "type": "Microsoft.Web/serverfarms",
  "location": "[resourceGroup().location]",
  "kind": "linux",
  "tags": {
    "displayName": "HostingPlan"
  },
  "sku": {
    "name": "[parameters('skuName')]",
    "capacity": "[parameters('skuCapacity')]"
  },
  "properties": {
    "name": "[parameters('hostingPlanName')]",
    "reserved": true
  }
},

For the web app definition, we need to set the “kind” property to “app,linux,container” to make this a containerized web app resource. We also need to set the DOCKER_CUSTOM_IMAGE_NAME to something. We will set the correct image later on from our deployment pipeline, but this property must be here when we create the web app resource.

{
  "apiVersion": "2015-08-01",
  "name": "[variables('webSiteName')]",
  "type": "Microsoft.Web/sites",
  "kind": "app,linux,container",
  "location": "[resourceGroup().location]",
  "tags": {
    "[concat('hidden-related:', resourceGroup().id, '/providers/Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]": "Resource",
    "displayName": "Website"
  },
  "dependsOn": [
    "[resourceId('Microsoft.Web/serverfarms/', parameters('hostingPlanName'))]"
  ],
  "properties": {
    "name": "[variables('webSiteName')]",
    "serverFarmId": "[resourceId('Microsoft.Web/serverfarms', parameters('hostingPlanName'))]",
    "siteConfig": {
      "DOCKER_CUSTOM_IMAGE_NAME": "containerwebapp"
    }
  }
},

Again, the full source is available at GitHub (see the link at the top).

Azure Pipeline

Let’s create a deployment pipeline that will build and push the image, and then deploy the ARM template and finally the web app container.

First up is the build definition. Here I’m using YAML, since it lets me store the build definition in source control together with the rest of the application:

NB: You need to change the azureSubscriptionEndpoint and azureContainerRegistry to the name of your service endpoint and Azure container registry


azure-pipelines.yml

name: 1.0$(rev:.r)

trigger:
- master

pool:
  vmImage: 'Ubuntu-16.04'

steps:
- task: Docker@1
  displayName: 'Build image'
  inputs:
    azureSubscriptionEndpoint: 'Azure Sponsorship'
    azureContainerRegistry: jakob.azurecr.io
    dockerFile: ContainerWebApp/Dockerfile
    useDefaultContext: false
    imageName: 'containerwebapp:$(Build.BuildNumber)'

- task: Docker@1
  displayName: 'Push image'
  inputs:
    azureSubscriptionEndpoint: 'Azure Sponsorship'
    azureContainerRegistry: jakob.azurecr.io
    command: 'Push an image'
    imageName: 'containerwebapp:$(Build.BuildNumber)'

- task: PublishBuildArtifacts@1
  displayName: 'Publish ARM template'
  inputs:
    PathtoPublish: 'ContainerWebApp.ResourceGroup'
    ArtifactName: template

The build definition performs the following steps:

  1. Builds the container image using the Docker task, where we point to the Dockerfile and give it an image name
  2. Pushes the container image to Azure Container Registry
  3. Publishes the content of the Azure resource group project back to Azure Pipelines. This will be used when we deploy the resource group in the release definition

Running this build should push an image to your container registry.

Now we will create a release definition that deploys the resource group and then the container web app.

First up is the resource group deployment. Here we use the Azure Resource Group Deployment task, where we point to the ARM template json file and the parameters file. We also override the name of the app hosting plan since that is an input parameter to the template.


Then we use the Azure App Service Deployment task to deploy the container to the web app. Note that we are using the preview 4.* version since that has support for deploying to Web App for Containers.


In the rest of the parameters for this task we specify the name of the container registry, the name of the image and the specific tag that we want to deploy. The tag is fetched from the build number of the associated build.

Finally we set the following app settings:

  • DOCKER_REGISTRY_SERVER_URL:                 The URL to the Docker registry
  • DOCKER_REGISTRY_SERVER_USERNAME:   The login to the registry. For ACR, this is the name of the registry
  • DOCKER_REGISTRY_SERVER_PASSWORD:   The password to the registry. For ACR, you can get this in the Access Keys blade in the Azure portal

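The App Settings field accepts -key value pairs, so the value could look something like this (the registry name and the secret variable holding the password are examples):

-DOCKER_REGISTRY_SERVER_URL https://jakob.azurecr.io -DOCKER_REGISTRY_SERVER_USERNAME jakob -DOCKER_REGISTRY_SERVER_PASSWORD $(acrPassword)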

That’s it. Running the release deploys the resource group (which will take 1-2 minutes the first time) and then the container to the web app. Once done, you can browse the site and verify that it works as expected.


Running .NET Core Unit Tests with Docker and Azure Pipelines

Using Docker for compiling your code is great, since that guarantees consistent behaviour regardless of where you are building your code, whether it’s on the local dev machine or on a build server somewhere. It also reduces the need to install any dependencies just to make the code compile; the only thing that you need is Docker.

When you create an ASP.NET Core project in Visual Studio and add Docker support to it, you will get a Dockerfile that looks something like this:

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443 

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
COPY . . 
WORKDIR "/src/WebApplication1" 
RUN dotnet build "WebApplication1.csproj" -c Release -o /app 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

This is an example of a multi-stage Docker build. The first stage is based on the .NET Core SDK Docker image, in which the code is restored, built and published. The second stage uses the smaller .NET Core runtime Docker image, into which the generated artifacts from the first stage are copied.

This results in a smaller Docker image that will be pushed to a Docker registry and later on deployed onto testing and production environments. Smaller images mean faster download and startup times, and since the image doesn’t contain as many SDKs etc., the surface area for security holes is typically smaller.

Now, this will compile just fine locally, and setting up a build definition in Azure Pipelines is easy-peasy; the default Docker container build pipeline template works out of the box.


But, we want to run unit tests also, and then publish the test results back to Azure DevOps. How can we do this?

First of all we need to build and run the tests inside the container, so we need to extend the Dockerfile. In this sample, I have added an XUnit test project called WebApplication1.UnitTests.

FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base 
WORKDIR /app 
EXPOSE 80 
EXPOSE 443

FROM microsoft/dotnet:2.1-sdk AS build 
WORKDIR /src 
COPY ["WebApplication1/WebApplication1.csproj", "WebApplication1/"] 
COPY ["WebApplication1.UnitTests/WebApplication1.UnitTests.csproj", "WebApplication1.UnitTests/"] 
RUN dotnet restore "WebApplication1/WebApplication1.csproj" 
RUN dotnet restore "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" 
COPY . . 
RUN dotnet build "WebApplication1/WebApplication1.csproj" -c Release -o /app 
RUN dotnet build "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" -c Release -o /app 

RUN dotnet test "WebApplication1.UnitTests/WebApplication1.UnitTests.csproj" --logger "trx;LogFileName=webapplication1.trx" 

FROM build AS publish 
RUN dotnet publish "WebApplication1.csproj" -c Release -o /app 

FROM base AS final 
WORKDIR /app 
COPY --from=publish /app . 
ENTRYPOINT ["dotnet", "WebApplication1.dll"]

Now we are also restoring and compiling the test project, and then we run dotnet test to actually run the unit tests. Since we want to be able to publish the results of the unit tests to Azure DevOps later on, we use the --logger parameter to specify that dotnet test should output a TRX file, and we also give it a name for clarity.

Now comes the tricky part. When we run these tests as part of a build, the results end up inside the container. To be able to publish the test results we need to access this file from outside the container. Locally we could have used Docker volumes to do this, but this will not work on a hosted build server.

Instead, we will add another task to our build definition that uses scripts to build the image, including running the unit tests, and then copies the test results file from the container to a folder on the build server. We use the docker cp command to do this:

docker build -f ./WebApplication1/Dockerfile --target build -t webapplication1:$(build.buildid) . 
docker create -ti --name testcontainer webapplication1:$(build.buildid) 
docker cp testcontainer:/src/WebApplication1.UnitTests/TestResults/ $(Build.ArtifactStagingDirectory)/testresults 
docker rm -fv testcontainer

Here we first build the image using docker build. By using the --target parameter it will only execute the first stage of the build (there is no point in continuing if the tests fail). To be able to access the file inside the container, we use docker create, which is a way to create and configure a container before actually starting it. In this case we don’t need to start it; we just use docker cp to extract the test result files to the host.

Now we will have the TRX test results file in the artifact folder on the build server, which means we can just add a Publish Test Results task to our build definition:

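If you prefer YAML, a sketch of that task could look like this (the file pattern matches the TRX file name used above):

- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFormat: 'VSTest'
    testResultsFiles: '**/webapplication1.trx'
    searchFolder: '$(Build.ArtifactStagingDirectory)/testresults'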

And voilà, running the build now runs the unit tests, and we can see the test results in the build summary as expected.


Meet Azure DevOps – formerly known as VSTS

Today Microsoft announced Azure DevOps, which is partly a rebranding of the existing Visual Studio Team Services but also has some exciting news.

The gist of the rebranding is that Azure DevOps is now a suite of services, where each service can be acquired and used separately from the others. If you only want to use source control (and use some other CI/CD system), that’s fine.

Do you have your code over at GitHub and want to use the CI/CD services in Azure DevOps? Works perfectly! By breaking the whole suite down into smaller services, it will make it easier for customers to find the best fit for their needs, without having to invest in the whole suite. Of course, you will still be able to easily get the whole suite when creating new accounts.

The new services as of today are (from the announcement):

Azure Pipelines

CI/CD that works with any language, platform, and cloud. Connect to GitHub or any Git repository and deploy continuously.

NB: This also includes a very generous offering targeted towards open source projects, where you get unlimited build minutes and 10 parallel build jobs

Azure Boards

Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting.

Azure Artifacts

Maven, npm, and NuGet package feeds from public and private sources.

Azure Repos

Unlimited cloud-hosted private Git repos for your project. Collaborative pull requests, advanced file management, and more.

Azure Test Plans

All in one planned and exploratory testing solution.

A Deep Dive into continuous delivery and Microservices on Azure

In March, Mathias Olausson and I will run two full-day deep dives into continuous delivery and microservices on Azure.

During the day you will learn about microservice architecture and how to build and deploy these using container technology and cloud services in Microsoft Azure.

The agenda looks like this:

  • Microservices architecture
    • Design principles
    • Breaking up the monolith
  • Implementing trunk based development practices with Visual Studio Team Services
    • Feature flags
    • Pull requests
    • Branch/Build policies
  • Using container technologies for packaging and delivering applications with zero downtime
    • Docker for Windows
    • Kubernetes
    • Azure Container registry
    • Azure Container Services (AKS)

  • Deployment pipelines with Visual Studio Team Services
    • Build automation
    • Release management

Read more about the course here, and sign up:

https://www.activesolution.se/event/a-deep-dive-into-continuous-delivery-and-microservices-on-azure/

Hope to see you either in Gothenburg or in Stockholm!

Deploying ARM Templates using Visual Studio Team Services

If you are running your applications in Azure, and in particular on PaaS, you should take a look at ARM templates as a way to manage your environments. ARM templates let you define and deploy your entire environment using JSON files that you store together with the rest of your source code. The deployment of ARM templates is idempotent, meaning that you can run them many times and they will always produce the same result.


In this post, I will show you how to deploy ARM templates together with your application using Visual Studio Team Services. As you will see, I will not use the out-of-the-box task for doing this, since it has some limitations. Instead, we will use a PowerShell script to execute the deployment of an ARM template.

 

The overall steps are:

  • Defining our ARM template for our environment.
  • Tokenize the ARM template parameters file
  • Create a PowerShell script that deploys the ARM template
  • Deploy everything from a VSTS release definition.

Let’s get started with the ARM template.

ARM Template

In this case, I will deploy an ARM template consisting of an Azure web app, a SQL Server + database and a Redis Cache. The web app and SQL resources are easy to deploy, since we can supply all the input from the release definition.
With the Redis cache, however, Azure Resource Manager will create some information (such as the primary key) as part of the deployment, which means we need to read this information from the output of the ARM template deployment.

In the outputs section of the template, we define what output we want to capture once the resource group has been deployed. In this case, I have defined three output variables:

  • redis_host
    The fully qualified Redis host name
  • redis_port
    The secure port that will be used to communicate with the cache
  • redis_primarykey
    The access key that we will use to authenticate

Since our web application will communicate with the Redis cache, we need to fetch this information from the ARM template deployment and store it in our web.config file. You will see later on how this can be done.
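A sketch of what the outputs section can look like, assuming the Redis cache resource name comes from the instanceCacheName parameter (the property names follow the Microsoft.Cache/Redis resource type):

"outputs": {
  "redis_host": {
    "type": "string",
    "value": "[reference(resourceId('Microsoft.Cache/Redis', parameters('instanceCacheName'))).hostName]"
  },
  "redis_port": {
    "type": "int",
    "value": "[reference(resourceId('Microsoft.Cache/Redis', parameters('instanceCacheName'))).sslPort]"
  },
  "redis_primarykey": {
    "type": "string",
    "value": "[listKeys(resourceId('Microsoft.Cache/Redis', parameters('instanceCacheName')), '2015-08-01').primaryKey]"
  }
}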
 

Learn more about authoring ARM templates here: https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-authoring-templates

 

ARM Template Tokenization

When deploying our template to different environments (dev, test, prod…) we need to supply the information specific to those environments. In VSTS Release Management, this information is stored using environment variables.
A common solution is to tokenize the files that are needed for deployment and then replace these tokens with the corresponding environment variables.

To do this, we add a separate parameters file for the template that contains all the parameters, but where all the values are replaced with tokens:


{
  "$schema": "http://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "hostingPlanName": {
      "value": "__HOSTINGPLANNAME__"
    },
    "administratorLogin": {
      "value": "__ADMINISTRATORLOGIN__"
    },
    "administratorLoginPassword": {
      "value": "__ADMINISTRATORLOGINPASSWORD__"
    },
    "databaseName": {
      "value": "__DATABASENAME__"
    },
    "webSiteName": {
      "value": "__WEBAPPNAME__"
    },
    "sqlServerName": {
      "value": "__SQLSERVERNAME__"
    },
    "dictionaryName": {
      "value": "__DATABASENAMEDICTIONARY__"
    },
    "extranetName": {
      "value": "__DATABASENAMEEXTRANET__"
    },
    "instanceCacheName": {
      "value": "__INSTANCECACHENAME__"
    }
  }
}


We will then replace these tokens just before the template is deployed.

PowerShell script

There is an existing task for creating and updating ARM templates, called Azure Resource Group Deployment. This task lets us point to an existing ARM template and the corresponding parameter file.

Here is an example of how this task is typically used.

 


 

The problem with this task is that it has very limited support for output parameters. You can map a variable to the output called Resource Group, but unfortunately there is an assumption that the resource group that you are creating contains virtual machines. If you execute this task with an ARM template containing, for example, an Azure Web App, you will get the following error when trying to map the output to a variable:

 

2017-01-23T09:09:49.8436157Z ##[error]The 'Get-AzureVM' command was found in the module 'Azure', but the module could not be loaded. For more information, run 'Import-Module Azure'.

So, to be able to read our output values we need to use PowerShell instead, which is arguably a better choice anyway since it allows you to run and test the deployment locally, saving you a lot of time.

When we create an Azure Resource Group project in Visual Studio, we get a PowerShell script that we can use as a starting point.

 


 

Most of this script handles the case where we need to upload artifacts as part of the resource group deployment. In this case we don’t need that; we deploy all our artifacts from Release Management after the resource group has been deployed.

Here is our PowerShell script that we will use to deploy the template:


#Requires -Version 3.0
#Requires -Module AzureRM.Resources
#Requires -Module Azure.Storage

Param(
    [string] [Parameter(Mandatory=$true)] $ResourceGroupLocation,
    [string] [Parameter(Mandatory=$true)] $ResourceGroupName,
    [string] [Parameter(Mandatory=$true)] $TemplateFile,
    [string] [Parameter(Mandatory=$true)] $TemplateParametersFile
)

Import-Module Azure -ErrorAction SilentlyContinue

try {
    [Microsoft.Azure.Common.Authentication.AzureSession]::ClientFactory.AddUserAgent("VSAzureTools-$UI$($host.name)".replace(" ","_"), "2.9")
} catch { }

Set-StrictMode -Version 3

$TemplateFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateFile))
$TemplateParametersFile = [System.IO.Path]::GetFullPath([System.IO.Path]::Combine($PSScriptRoot, $TemplateParametersFile))

# Create or update the resource group using the specified template file and template parameters file
New-AzureRmResourceGroup -Name $ResourceGroupName -Location $ResourceGroupLocation -Verbose -Force -ErrorAction Stop

$output = (New-AzureRmResourceGroupDeployment -Name ((Get-ChildItem $TemplateFile).BaseName + '-' + ((Get-Date).ToUniversalTime()).ToString('MMdd-HHmm')) `
                                   -ResourceGroupName $ResourceGroupName `
                                   -TemplateFile $TemplateFile `
                                   -TemplateParameterFile $TemplateParametersFile -Force -Verbose)

Write-Output ("##vso[task.setvariable variable=REDISSERVER]" + $output.Outputs['redis_host'].Value)
Write-Output ("##vso[task.setvariable variable=REDISPORT]" + $output.Outputs['redis_port'].Value)
Write-Output ("##vso[task.setvariable variable=REDISPASSWORD;issecret=true]" + $output.Outputs['redis_primarykey'].Value)


The special part of this script is the last three lines. Here, we read the output variables that we defined in the ARM template and then we use one of the VSTS logging commands to map these into variables that we can use in our release definition.

The syntax of the SetVariable logging command is ##vso[task.setvariable variable=NAME]<VARIABLEVALUE>. 

 

Note: You can read more about these commands at https://github.com/Microsoft/vsts-tasks/blob/master/docs/authoring/commands.md

 

Release Definition

Finally we can put all of this together by creating a release definition that deploys the ARM template.


Note: You will of course need to create a build definition that packages your scripts, ARM templates and deployment artifacts. I won’t show this here, but just reference the outputs from an existing build definition.

 

Here is what the release definition looks like.

 

Let’s walk through the steps:

  1. Replace tokens
    Here we replace the tokens in our parameters.json file that we defined earlier. There are several tasks in the marketplace for doing token replacement; I’m using the one from Guillaume Rouchon (https://github.com/qetza/vsts-replacetokens-task#readme)
  2. Deploy Azure environment
    Run the PowerShell script using the Azure PowerShell task. This task handles the connection to Azure, so we don’t have to think about that.


    Here I reference the PowerShell script from the build output artifacts, and also I supply the necessary parameters to the PS script:

    Script Arguments
    -ResourceGroupLocation "$(resourceGroupLocation)" -ResourceGroupName $(resourceGroupName) -TemplateFile "$(System.DefaultWorkingDirectory)/SampleApp.CI/environment/templates/sampleapp.json" -TemplateParametersFile "$(System.DefaultWorkingDirectory)/SampleApp.CI/environment/templates/sampleapp.parameters.json"

  3. Replace tokens
    Now we need to update the tokens in our SetParameters file, which is used by web deploy. It is important that we run this task after the Deploy Azure environment step, since we need the output variables from the resource group deployment. Remember, these variables are now available as environment variables, so they will be inserted in the same way as the variables that we have defined manually.

  4. Deploy Web app + Deploy SQL Database
    These steps just perform a simple deployment of an Azure Web App and a SQL dacpac deployment.

 

That’s it, happy deployment!

New Swedish Meetup Group for Microsoft ALM and DevOps

We have decided that it is time to create a meetup group for people that are interested in the Microsoft ALM and DevOps story!


 

Together with Mathias Olausson and a few other people we have created a new Meetup group and announced the first meeting.


Our plan is to continue meeting every month or so to learn about and discuss new concepts and ideas in the area of ALM and DevOps on the Microsoft stack. This is a wide area, which spans all roles in the development process, so there will be something for everyone.

 

First meetup: Microsoft Team Services Agile Transformation Story + VS ALM Update

The first meeting is set for October 25th, where we will have Jose Rady Allende, a Program Manager on the Visual Studio Team Services team, join us online to talk about the Microsoft Team Services Agile Transformation story.
We’re also going to have a few lightning talks about recent additions to the TFS/VSTS platform.

Meeting link:

http://www.meetup.com/swedish-ms-alm-devops/events/234449734/

 

There are already around 25 people that have signed up for it, so sign up before it gets full!

 

Hope to see you there!

 


Using Web Deploy in Visual Studio Team Services Release Management

This post does not really cover something new, but since I find myself explaining this to people now and then, I thought I’d write a quick post on the subject.

So, we want to create a web deploy package as part of our automated build, and then take this package and deploy it to multiple environments, where each environment can have different configuration settings, using VSTS Release Management. Since we only want to build our package once, we have to apply the environment specific settings at deployment time, which means we will use Web Deploy parameters.

 

Here are the overall steps needed:

  1. Create a parameters.xml file in your web project
  2. Create a publish profile for the web deploy package
  3. Set up a VSTS build that creates the web deploy package and uploads the package to the server
  4. Create a Release definition in VSTS that consumes the web deploy package
  5. In each RM environment, replace the tokens in the SetParameters file

Let’s run through these steps in detail:

Create a parameters.xml file

As you will see later on, a publish profile contains configurable settings for the web site name and any connection strings, which will end up in the *.SetParameters.xml file that is used at deployment time. But in order for other configuration settings, like appSettings, to end up in this file, you need to define these settings. This is done by creating a file called parameters.xml in the root of your web application.

Tip: A fellow MVP, Richard Fennell, has created a nifty Visual Studio extension that simplifies the process of creating the parameters.xml file. It will look at your web.config file and then create a parameters.xml file with all the settings that it finds.


 

In this case, I have three application settings in the web.config file, so I end up with the following parameters.xml file. Note that I have set the defaultValue attribute for all parameters to a token on the form __TOKEN__. These are the configuration values that will end up in the MyApp.SetParameters.xml file, together with the web deployment package. We will replace these values at deployment time, using a task in our release definition.

<parameters>
  <parameter name="IsDevelopment" description="Description for IsDevelopment" defaultValue="__ISDEVELOPMENT__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/MyApp.Properties.Settings/setting[@name='IsDevelopment']/value/text()" />
  </parameter>
  <parameter name="WebApiBaseUrl" description="Description for WebApiBaseUrl" defaultValue="__WEBAPIBASEURL__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/MyApp.Properties.Settings/setting[@name='WebApiBaseUrl']/value/text()" />
  </parameter>
  <parameter name="SearchFilterDelta" description="Description for SearchFilterDelta" defaultValue="__SEARCHFILTERDELTA__" tags="">
    <parameterEntry kind="XmlFile" scope="\\web.config$" match="/configuration/applicationSettings/MyApp.Properties.Settings/setting[@name='SearchFilterDelta']/value/text()" />
  </parameter>
</parameters>

Creating a Publish Profile

Now, let’s create a publish profile that defines how the web deployment package should be created. Right-click on the web application project and select Publish, then select the Custom option.


 

Since the publish profile will be used to create a web deployment package, I like to call it CreatePackage (but you are of course free to call it whatever you want)


On the Connection tab, select Web Deploy Package as the publish method, then give the generated package a name (including .zip).

As the Web Site name, we enter a tokenized value __WEBSITE__. This token will also end up in the MyApp.SetParameters.xml file.


Save the publish profile and commit and push your changes. Now it’s time to create a build definition.

 

Create a Build Definition that generates a web deploy package

I won’t go through all the details of creating a build definition in VSTS, but just focus on the relevant parts for this blog post.

To generate a web deploy package, we need to pass some magic MSBuild parameters as part of the Visual Studio build task. Since we have a publish profile that contains our settings, we need to refer to this file. We also want to specify where the resulting files should be placed.

Enter the following string in the MSBuild Arguments field:

/p:DeployOnBuild=true /p:PublishProfile=CreatePackage /p:PackageLocation=$(build.stagingDirectory)


 

DeployOnBuild=true is required to trigger the web deployment publishing process, and then we use the PackageLocation property to specify that the output should be placed in the staging directory of the build. This makes it easy to upload the artifacts at the end of the build.


 

This will generate an artifact called drop in the build that contains all the files needed to deploy this application using MSDeploy.

As you can see, we have all the generated web deploy files here. We will use three of them:

MyApp.zip – The web deploy package

MyApp.SetParameters.xml – The parameterization file that contains our tokenized parameters

MyApp.Deploy.cmd – A command file that simplifies running MSDeploy with the correct parameters

 

Creating a Release Definition

Finally, we will create a release definition that deploys this web deploy package to two different environments, let’s call them Test and Prod. In each environment we need to apply the correct configuration values. To do this, we have to replace the token variables in our MyApp.SetParameters.xml file.

There is currently no out-of-the-box task to do this, but there are already several in the Visual Studio Marketplace. Here, I will use the Replace Tokens task from Guillaume Rouchon, available at https://marketplace.visualstudio.com/items?itemName=qetza.replacetokens. Install it in your Visual Studio Team Services account, and the task will be available in the build/release task catalog, in the Utility category.

 


 

Each environment in the release definition will just contain two tasks: the first one for the token replacement and the other one for deploying the web deploy package. To do this, we run the MyApp.deploy.cmd file that was generated by the build. Since the parameters have already been set with the correct values in the SetParameters file, we don’t need to pass any parameter values on the command line.

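The deployment step then simply runs the generated command file; the /Y switch tells it to perform the actual deployment (as opposed to the /T trial run):

MyApp.deploy.cmd /Y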

 

Also, we must specify the values for each variable in the environment. Right-click on the environment and then add these variables.

 

Tip: Create the Test environment first, with all variables and tasks. Once it’s done, use the Clone environment feature to create a Prod environment, and then just replace the configuration values.

 

That’s it, now you can run the release and it will deploy your web application with the correct configuration to each environment.

 


 

 

 

 

Talking Visual Studio ALM Extensibility at DevSum16

Last year I had a great time speaking at the DevSum conference, the biggest .NET developer conference in Sweden. Back then, I talked about moving your development to the cloud using Visual Studio Team Services. Active Solution, where I work, was a gold partner for this event and in addition to me my colleagues Alan Smith and Peter Örneholm also spoke at the conference. We had a lot of fun in our booth showing the Lego robots running on Raspberry PIs, connected to Azure for movement control and result collection.

This year I had the fortune to be selected again to speak at DevSum16, and this time I will talk about the different options around integration and extensibility of the Visual Studio ALM platform. This means that I will talk about things like Service Hooks, OAuth, REST API and UI extensibility among other things.

 

Here is the session description (http://www.devsum.se/speaker/jakob-ehn/), hope to see you there!

Don’t be locked in – Integrate and Extend the Visual Studio ALM Platform

The days when you used one tool chain for all your development are long gone. Developing modern applications today often requires a variety of tools: 3rd party tools and services, but also homegrown systems, are often used as part of the process. In the new era of Microsoft the term “Open ALM” is key, focusing on trying to build best-in-breed tools for software development companies, but at the same time making sure that it is open and extensible.

 

In this session we will look at the different options for integrating and extending Visual Studio TFS and Team Services. We will look at:

  • Using Service Hooks to automate workflows with other services such as Trello, GitHub and Jenkins.
  • Utilizing the REST API to automate processes in TFS
  • Extending the Web UI itself with custom extensions, from simple context menus to full-fledged custom pages and hubs. We will also look at how we can publish these extensions to the new Visual Studio Marketplace.