Implementing Azure Container Registry Retention with Brigade

If you are doing any work related to containers and Azure, you are most likely using Azure Container Registry for storing images. The amount of storage available for these images depends on the pricing tier you are using.
If you exceed this amount of storage, you will pay an additional fee for every GB of image data that exceeds the limit. See the table below for the current pricing details.

[Image: Azure Container Registry pricing details]

100 GB of included storage might sound like a lot, but you will soon find that your CI pipelines fill up this space, with new image versions being pushed on every commit.

Now, if you select the Premium tier, there is a retention policy feature available (https://docs.microsoft.com/en-us/azure/container-registry/container-registry-retention-policy), but the Premium tier will cost you three times as much.
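For reference, if you do go with the Premium tier, the built-in retention policy is enabled through the Azure CLI. Here is a minimal sketch (the registry name is just a placeholder, and the policy only applies to untagged manifests; see the linked docs for details):

# Enable the Premium-tier retention policy, purging untagged manifests older than 7 days
az acr config retention update --registry myregistry --status Enabled --days 7 --type UntaggedManifests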

Implementing purging of older images in ACR yourself is easy using the Azure CLI or PowerShell, but you need some mechanism for hosting and running these scripts whenever you push a new image to your registry.

This is a perfect case for Brigade: it already comes with a Container Registry gateway that responds to webhooks and translates them into Brigade events, and you can host everything inside your existing Kubernetes cluster.

See my introductory post on Brigade here:
https://blog.ehn.nu/2020/01/event-driven-scripting-in-kubernetes-with-brigade/

The overall solution will look like this:

[Image: Solution overview]

Whenever a new image is pushed to an Azure Container Registry, it will send a request to a Brigade Container Registry gateway running in your Kubernetes cluster of choice. This in turn kicks off a build of a Brigade project, which contains a script that authenticates back to the registry and purges a selected set of older images.

The source code for the Brigade JavaScript pipeline, including the custom Bash script, is available here:
https://github.com/jakobehn/brigade-acr-retention

Let’s go through the steps needed to get this solution up and running. You can either use the GitHub repository directly, or store these scripts in your own source control.

Create a Service Principal

To be able to purge images in Azure Container Registry from a Docker container running in our Brigade pipeline, we will create a service principal. This can be done by running the following command:

az ad sp create-for-rbac --name ACRRetentionPolicy
Changing "ACRRetentionPolicy2" to a valid URI of "http://ACRRetentionPolicy", which is the required format used for service principal names
{
  "appId": "48408316-6d71-4d36-b4ea-37c63e3e063d",
  "displayName": "ACRRetentionPolicy",
  "name": "http://ACRRetentionPolicy",
  "password": "<<EXTRACTED>>",
  "tenant": "<<EXTRACTED>>"
}

Make a note of the appId, password and tenantId as you will be using them later on.
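By default the service principal gets a fairly broad scope. If you prefer to lock it down, one option (a sketch, using the registry name that appears later in this post) is to assign only the built-in AcrDelete role, scoped to the registry:

# Look up the resource id of the registry and scope the AcrDelete role to it
ACR_ID=$(az acr show --name jakob --query id --output tsv)
az role assignment create --assignee <appId> --role AcrDelete --scope $ACR_ID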

Install Brigade

If you haven’t already, install Brigade in your Kubernetes cluster. Make sure to enable Brigade’s Container Registry gateway by setting the cr.enabled property to true:

helm repo add brigade https://brigadecore.github.io/charts
helm repo update
helm install -n brigade brigade/brigade --set cr.enabled=true,cr.service.type=LoadBalancer

Verify that all components of Brigade are running:

PS C:\brigade> kubectl get pods   

NAME                                            READY   STATUS    RESTARTS   AGE
brigade-server-brigade-api-58d879df79-dczl6     1/1     Running   0          8d
brigade-server-brigade-cr-gw-577f5c787b-kx2m4   1/1     Running   0          8d
brigade-server-brigade-ctrl-8658f456c4-pbkx2    1/1     Running   0          8d
brigade-server-kashti-7546c5567b-ltxqm          1/1     Running   0          8d

List the services and make a note of the public IP address of the Container Registry gateway service:

PS C:\brigade> kubectl get svc                                                                                                        
NAME                           TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
brigade-server-brigade-api     ClusterIP      10.0.188.158   <none>          7745/TCP       8d
brigade-server-brigade-cr-gw   LoadBalancer   10.0.6.148     40.114.186.81   80:31844/TCP   8d
brigade-server-kashti          ClusterIP      10.0.193.112   <none>          80/TCP         8d
kubernetes                     ClusterIP      10.0.0.1       <none>          443/TCP        13d


Brigade script

Every Brigade pipeline references a JavaScript file that responds to various events and chains jobs together using container images. As seen below, the necessary parameters are passed in as environment variables.
The values of these variables are fetched from the secrets of the Brigade project that we’ll create in the next step.

When the image_push event is received (from the Brigade Container Registry gateway), the script creates a job, passes in the environment variables and defines the tasks to be run inside the container. We are using the mcr.microsoft.com/azure-cli Docker image, which is the official image for using the Azure CLI inside a container. The task runs the purge-images.sh script, which is available in the /src folder. When a Brigade project refers to a Git repository, the source will automatically be cloned into this folder inside the container, using a Git sidecar container.

const { events, Job } = require("brigadier");

events.on("image_push", async (e, p) => {

    var purgeStep = new Job("purge", "mcr.microsoft.com/azure-cli");
    purgeStep.env = {
        subscriptionName: p.secrets.subscriptionName,
        registryName: p.secrets.registryName,
        repositoryName: p.secrets.repositoryName,
        minImagesToKeep: p.secrets.minImagesToKeep,
        spUserName: p.secrets.spUserName,
        spPassword: p.secrets.spPassword,
        spTenantId: p.secrets.spTenantId
    };
    purgeStep.tasks = [
        "cd src",
        "bash purge-images.sh",
    ];
    purgeStep.run();

});

Script for purging images from Azure Container Registry

The logic for purging older images from the container registry is implemented in a Bash script called purge-images.sh, also located in the GitHub repository. It authenticates using the service principal, then lists all image tags from the corresponding container registry and deletes all images except the latest X ones (configured through the minImagesToKeep environment variable).

#Login using supplied SP and select the subscription
az login --service-principal --username $spUserName --password $spPassword --tenant $spTenantId
az account set --subscription "$subscriptionName"

# Get all the tags from the supplied repository
TAGS=($(az acr repository show-tags --name $registryName --repository $repositoryName --output tsv --orderby time_desc))
total=${#TAGS[*]}

for (( i=$minImagesToKeep; i<=$(( $total - 1 )); i++ ))
do
    imageName="$repositoryName:${TAGS[$i]}"
    echo "Deleting image: $imageName"
    az acr repository delete --name $registryName --image $imageName --yes
done

echo "Retention done"

Creating the Brigade project

To create a project in Brigade, you need the Brigade CLI. Running brig project create will take you through a wizard where you can fill out the details.
In this case, I will point it to the GitHub repository that contains the Brigade.js file and the bash script.

Here is the output:

PS C:\acr-retention-policy> brig project create                                                                                       
? VCS or no-VCS project? VCS
? Project Name jakobehn/brigade-acr-retention
? Full repository name github.com/jakobehn/brigade-acr-retention
? Clone URL (https://github.com/your/repo.git) https://github.com/jakobehn/brigade-acr-retention.git
? Add secrets? Yes
?       Secret 1 subscriptionName
?       Value Microsoft Azure Sponsorship
? ===> Add another? Yes
?       Secret 2 registryName
?       Value jakob
? ===> Add another? Yes
?       Secret 3 repositoryName
?       Value acrdemo
? ===> Add another? Yes
?       Secret 4 minImagesToKeep
?       Value 5
? ===> Add another? Yes
?       Secret 5 spUserName
?       Value <<EXTRACTED>>
? ===> Add another? Yes
?       Secret 6 spPassword
?       Value <<EXTRACTED>>
? ===> Add another? Yes
?       Secret 7 spTenantId
?       Value <<EXTRACTED>>
? ===> Add another? No
? Where should the project’s shared secret come from? Specify my own
? Shared Secret <<EXTRACTED>>
? Configure GitHub Access? No
? Configure advanced options No
Project ID: brigade-c0e1199e88cab3515d05935a50b300214e7001610ae42fae70eb97

Set up the ACR webhook

Now that we have everything set up, the only thing missing is to make sure that your Brigade project is kicked off every time a new image is pushed to the container registry. To do this, navigate to your Azure Container Registry and select the Webhooks tab. Create a new webhook, and point it to the IP address of the container registry gateway service that you noted before.

Note the format of the URL; read more about the Brigade container registry gateway here: https://docs.brigade.sh/topics/dockerhub/

[Image: Creating an ACR webhook]

To only receive events from one specific repository, I have specified the Scope property and set it to acrdemo:*, which effectively filters out all other push events.
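If you prefer to script this instead of using the portal, the webhook can also be created with the Azure CLI. A sketch (the exact URL path expected by the Brigade container registry gateway is described in the docs linked above, so treat the URI below as a placeholder):

# Create a push webhook scoped to the acrdemo repository, pointing at the gateway's public IP
az acr webhook create --registry jakob --name brigaderetention \
  --uri http://40.114.186.81/<path-from-brigade-docs> \
  --actions push \
  --scope 'acrdemo:*'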

Trying it out

Let’s see if this works then, shall we? I’m pushing a new version of my demo image (jakob.azurecr.io/acrdemo:1.17), and then running the Brigade dashboard (brig dashboard).

I can see that a build has been kicked off for my project, and the result looks like this:

[Image: Brigade build for the project]

I can see that I got an image_push event and that the build contained one job called purge (that name was specified in the JavaScript pipeline when creating the job). We can drill down into this job and see the output from the script that was executed:

[Image: Output from the purge job]

Since I set minImagesToKeep to 5, the script deleted version 1.12 (leaving the 5 latest versions in the repository).
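If you want to double-check the result, you can list the remaining tags with the Azure CLI (using the registry and repository names from this example):

# List the tags that are left in the repository, newest first
az acr repository show-tags --name jakob --repository acrdemo --orderby time_desc --output table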

Hope you found this valuable!


Event-driven scripting in Kubernetes with Brigade

In most projects that I’ve been part of, sooner or later the need for various types of automation jobs arises: for example cleaning up old files, moving database backups, running health checks or system tests, and so on.

Historically we’ve implemented these tasks using, for example, the Windows Task Scheduler or some custom Windows Service app. More recently, we’ve been using Azure Automation jobs for this. Sometimes it can also make sense to use CI/CD automation tools like Azure DevOps for these jobs.

With the move to containers and Kubernetes, it can make a lot of sense to use that platform not just for the business apps that you are developing, but also for these types of automation workloads. It means that you don’t have to invest in and manage another platform, and you can leverage existing and third-party container images to build automation workflows.

Brigade

Brigade is a platform that makes it easy to create simple or complex workflows that run on Kubernetes. You use Docker containers as the basis for each step in the workflow, and wire them together using JavaScript.


Brigade is an open-source project, read more about it at:
https://brigade.sh/

Brigade runs on any vanilla Kubernetes cluster; you don’t need anything extra installed to run Brigade pipelines.

Installing Brigade is as easy as running the following two commands:

helm repo add brigade https://brigadecore.github.io/charts
helm install brigade/brigade --name brigade-server

The image below shows the main concepts in use by Brigade:

[Image: Brigade concepts]

Project
For every automation workflow that you want to implement, you will create a project. Every project has some metadata attached to it, such as id, name and so on. It also either contains or references the JavaScript code that contains the pipeline logic.

Build
A build is created every time a script is triggered through some external event. The build runs until all jobs are finished, and you can view the output logs from the running build as well as after it has finished.

Job
Each build will contain one or more jobs. For each job, a container instance is started, and then a series of tasks is executed inside that container. You specify the jobs and the tasks in the JavaScript code, and how the jobs should be scheduled.

Gateway
A gateway transforms outside triggers (a Git pull request, a Trello card move, etc.) into events that are passed into the pipeline, where you handle them in your code.

Brigade comes with a Generic gateway that listens for and accepts POSTed JSON messages in any format (it also explicitly supports the CloudEvents format). In addition, there are several custom gateways that make integration a lot easier with services such as GitHub, Docker Container Registry or Azure Event Grid.

A basic “hello-world” type of Brigade pipeline can look like this:

const { events, Job } = require("brigadier");

//Handler for exec event
events.on("exec", () =&amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;amp;gt; {

  var job = new Job("say-hello", "alpine:3.8");
  job.tasks = [
    "echo Hello",
    "echo World"
  ];

  job.run();
  
});

Here, the pipeline is triggered by the exec event, and inside that event handler it starts a new job called “say-hello” which contains two tasks, each of which just prints a message. The job is executed inside a container from the alpine:3.8 image, which will be downloaded from Docker Hub and started automatically for you. Of course, you can use any public image, or a private image from your own container registry.
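If you want to try a script like this yourself, the exec event can be triggered manually with the Brigade CLI. A minimal sketch (the project name is just an example):

# Trigger the exec event for a project, using a local brigade.js file
brig run myorg/myproject -f brigade.js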

Brigade has excellent documentation, I encourage you to read up on it more at https://docs.brigade.sh/

In this post I will show a slightly more complex example, taken from a recent customer project where we developed a microservice application running on Kubernetes and found the need for some extra automation.

Removing Kubernetes environment on PR completion

Kubernetes makes it easy to create new isolated environments for your application when you need to. A common desire of many teams is to deploy the application into a fresh environment every time a pull request is created. This lets the team and stakeholders test and verify the feature that is being developed, before it gets merged into the master branch.

Using Azure DevOps, it’s quite easy to set up a release pipeline where every PR is deployed into a new namespace in Kubernetes. You can enable stages in a pipeline to be triggered by pull requests, and then use information from that PR to create a new namespace in your Kubernetes cluster and deploy the app into that namespace.

The problem we experienced recently at a customer was: how can we make sure this namespace (and everything in it) is removed once the PR is completed and merged? We can’t keep it around, since that would eventually consume all the resources in the cluster, and we don’t want to rely on cleaning it up manually.

This turned out to be a perfect case for Brigade. We can configure a service hook in Azure DevOps, so that every time a PR is updated we trigger a Brigade pipeline. In the pipeline we check if the PR was completed and, if so, extract the relevant information from the PR and clean up the corresponding namespace. To do this, we used existing container images that let us run helm and kubectl commands.

The Brigade script looks like this:

const { events, Job } = require("brigadier");
const util = require('util')

const HELM_VERSION = "v2.13.0"
const HELM_CONTAINER = "lachlanevenson/k8s-helm:" + HELM_VERSION;

const KUBECTL_VERSION = "v1.12.8";
const KUBECTL_CONTAINER = "lachlanevenson/k8s-kubectl:" + KUBECTL_VERSION;

events.on("simpleevent", (event, project) => {
    const payload = JSON.parse(event.payload);
    const prId = payload.resource.pullRequestId;

    if (!payload.resource.sourceRefName.includes('/feature/') && !payload.resource.sourceRefName.includes('/bug/')) {
        console.log(`The source branch ${payload.resource.sourceRefName} is not a /feature/ or /bug/ and is therefore skipped.`)
        return;
    }

    if (payload.resource.status !== "completed" && payload.resource.status !== "abandoned") {
        console.log(`PullRequest not complete or abandoned (current status: ${payload.resource.status}).`);
        return;
    }

    var helm_job = new Job("helm-delete-release", HELM_CONTAINER);
    helm_job.env = {
        'HELM_HOST': "10.0.119.135:44134"
    };
    helm_job.tasks = ["helm init --client-only", `helm delete --purge samplewebapp-${prId}`];

    var kubectl_job = new Job("kubectl-delete-ns", KUBECTL_CONTAINER);
    kubectl_job.tasks = [`kubectl delete namespace samplewebapp-${prId}`];

    console.log("==> Running helm_job Job")
    helm_job.run().then(helmResult => {
        console.log(helmResult.toString())

        kubectl_job.run().then(kubectlResult => {
            console.log(kubectlResult.toString());
        });
    })
});

events.on("error", (e) => {
    console.log("Error event " + util.inspect(e, false, null))
    console.log("==> Event " + e.type + " caused by " + e.provider + " cause class" + e.cause + e.cause.reason)
})

events.on("after", (e) => {
    console.log("After event fired " + util.inspect(e, false, null))
});
 

This script is triggered by the “simpleevent” event. This event is handled by the generic gateway in Brigade and can be used to send any kind of information (as a JSON document) to your pipeline. To trigger the event, we configure a service hook in Azure DevOps for the “Pull request updated” event, and point it to the generic gateway:

[Image: Azure DevOps service hook configuration]

The full URL looks like this:

https://brigadedemo.ehn.nu/simpleevents/v1/brigade-55cbf57f7aaeb59afa1fe4d33ca6a5a635eefe060b057c423c97a0/somesecret

The URL contains the project id and the secret that were specified when creating the project. This is how external requests are authenticated and routed to the correct Brigade script.
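If you want to test the gateway without going through Azure DevOps, you can post a payload manually. This is just a sketch (project id and secret are placeholders) that mimics the fields of the Azure DevOps payload that the script above reads:

# Hypothetical manual test of the generic gateway endpoint
curl -X POST "https://brigadedemo.ehn.nu/simpleevents/v1/<PROJECT_ID>/<SECRET>" \
  -H "Content-Type: application/json" \
  -d '{"resource": {"pullRequestId": 99, "sourceRefName": "refs/heads/feature/test", "status": "completed"}}'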

Inside the event handler we use two different container images. The first one is for running a Helm command to delete the Kubernetes deployment. Since Helm can’t delete the namespace, we need to run a second job inside another container image that contains the kubectl tool, where we can delete the namespace by running:

kubectl delete namespace samplewebapp-${prId}

The prId variable is parsed from the PullRequest updated event coming from Azure DevOps. We use the id of the pull request to create a unique namespace (in this case pull request with id 99 will be deployed into the samplewebapp-99 namespace).

NB: You will need to make sure that the service account for Brigade has enough permissions to delete the namespace. Namespaces are a cluster-level resource, so this requires higher permissions compared to deleting a deployment inside a namespace.

One easy way to do this is to assign the cluster-admin role to the Brigade service account, although this is not recommended for production.
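For a demo environment, a minimal sketch could look like the following. Note that the service account name and namespace are assumptions and depend on how Brigade was installed, so verify them first (for example with kubectl get serviceaccounts --all-namespaces | grep brigade):

# Grant cluster-admin to the assumed Brigade worker service account (not for production)
kubectl create clusterrolebinding brigade-worker-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:brigade-worker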

Now, when a PR is complete, our pipeline is triggered and it will delete the deployment and then the namespace.

To view the running jobs and their output, you can either use the Brigade dashboard (called Kashti) by running brig dashboard, or you can install the Brigade terminal, which gives you a similar UI inside your favourite console.

Here is the output from the PR job in the Brigade terminal:

[Image: Build jobs in the Brigade terminal]

It shows that two jobs were executed in this build, and you can see the images that were used and the id of each job. To see the output of a job, just drill down into it:

[Image: Output of the helm job]

Here you can see the output of the helm job that deletes my Helm deployment for the corresponding pull request.

Summary

I encourage you to take a look at Brigade. It’s easy to get started with, and you can implement all sorts of automation without having to resort to other platforms and services. And although JavaScript might put some people off, the power of a real programming language (compared to some DSL) pays off when you want to implement something non-trivial.

If you already are using Kubernetes, why not use it for more things than your apps!

Thanks to my colleague Tobias Lolax (https://twitter.com/Tobibben) who did the original implementation of this for our customer.

Getting started with Windows Containers in Azure Kubernetes Service

Many of us have been eagerly waiting for the announcement that Microsoft made at the Build 2019 conference: Windows Containers are now in public preview in Azure Kubernetes Service! Yes, it’s in preview, so we still have to wait before putting applications into production, but it is definitely time to start planning and testing migrations of your Windows applications to AKS, such as full .NET Framework apps.

Containers on Windows are of course still not as mature as on Linux, but they are fully supported on Windows, and Windows node support has been GA in Kubernetes since version 1.14.

NB: Read about the current limitations for Windows Server node pools and application workloads in AKS here:
https://docs.microsoft.com/en-us//azure/aks/windows-node-limitations

In this introductory post, I will show how to create a new AKS cluster with a Windows node and then deploy an application to the cluster using Helm.

Enabling AKS Preview Features

If Windows support in AKS is still in preview when you are reading this, you first need to enable the preview features before you can create a cluster with Windows nodes:

az extension add --name aks-preview

az feature register --name WindowsPreview --namespace Microsoft.ContainerService


The operation will take a while to complete; you can check the status by running the following command:

az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/WindowsPreview')].{Name:name,State:properties.state}"


When the registration state is Registered, run the following command to refresh it:

az provider register --namespace Microsoft.ContainerService

Creating an AKS Cluster with Windows nodes

When the registration of the preview feature has been completed, you can go ahead and create a cluster. Here, I’m creating a one-node cluster since it will only be used for demo purposes. Note that it is currently not possible to create an all-Windows-node cluster; you have to create at least one Linux node. It is also necessary to use the Azure CNI network plugin.

The command below creates a one-node cluster with the Azure CNI network plugin, and specifies the credentials for the Windows nodes, should you need to log in to these machines. Replace <MY_PASSWORD> with your own strong password.

(Note that the commands below are executed in a Bash shell):


az group create --name k8s --location westeurope


az aks create \
    --resource-group k8s \
    --name k8s \
    --node-count 1 \
    --enable-addons monitoring \
    --kubernetes-version 1.14.0 \
    --generate-ssh-keys \
    --windows-admin-password <MY_PASSWORD> \
    --windows-admin-username azureuser \
    --enable-vmss \
    --network-plugin azure


Now we will add a new node pool that will host our Windows nodes. For that, we use the new az aks nodepool add command. Note the os-type parameter, which dictates that this node pool will be used for Windows nodes.


az aks nodepool add \
  --resource-group k8s \
  --cluster-name k8s \
  --os-type Windows \
  --name npwin \
  --node-count 1 \
  --kubernetes-version 1.14.0


When the command has completed, you should see two nodes in your cluster:

kubectl get nodes

NAME                                STATUS   ROLES   AGE   VERSION
aks-nodepool1-15123610-vmss000000   Ready    agent   8d    v1.14.0
aksnpwin000000                      Ready    agent   8d    v1.14.0


Installing Helm

Even though Helm has its quirks, I find it very useful for packaging and deploying Kubernetes applications. A new major version is currently being worked on, which will (hopefully) remove some of the major issues that exist in the current version of Helm.

Since Helm is not installed in an AKS cluster by default, we need to install it. Start by installing the Helm CLI; follow the instructions here for your platform:

https://helm.sh/docs/using_helm/#installing-helm


Before deploying Helm, we need to create a service account with proper permissions that will be used by Helm’s server component, called Tiller. Create the following file:

helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system


Run the following command to create the service account and the cluster role binding:

kubectl apply -f helm-rbac.yaml


To deploy Helm to the AKS cluster, we use the helm init command. To make sure that Tiller ends up on a Linux node, we use the --node-selectors parameter:


helm init --service-account tiller --node-selectors "beta.kubernetes.io/os=linux"


To make sure that Helm is working properly, run helm list; it should just return an empty list of releases.

Deploy an Application

Now that we have an AKS cluster up and running with Helm installed, let’s deploy an application. I will once again use the QuizBox application that Mathias Olausson and I developed for demos at conferences and workshops. To simplify the process, I have pushed the necessary images to Docker Hub, which means you can deploy them directly to your cluster to try this out.


The source code for the Helm chart and the application is available here: https://github.com/jakobehn/QBox


Let’s look at the interesting parts in the Helm chart. First up is the deployment of the web application. Since we are using Helm charts, we will pick the values from a separate values.yaml file at deployment time, and refer to them using the {{expression}} format.

Note also that we are using the nodeSelector property here to specify that the pod should be deployed to a Windows node.


deployment-frontend.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: {{ .Values.frontend.replicas }}
  template:
    metadata:
      labels:
        app: qbox
        tier: frontend
    spec:
      containers:
      - name: frontend
        image: "{{ .Values.frontend.image.repository }}:{{ .Values.frontend.image.tag }}"
        ports:
        - containerPort: {{ .Values.frontend.containerPort }}
      nodeSelector:
        "beta.kubernetes.io/os": windows
 


The deployment file for the backend API is pretty much identical:

deployment-backend.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: {{ .Values.backend.replicas }}
  template:
    metadata:
      labels:
        tier: backend
    spec:
      containers:
      - name: backend
        image: "{{ .Values.backend.image.repository }}:{{ .Values.backend.image.tag }}"
        ports:
        - containerPort: {{ .Values.backend.containerPort }}
      nodeSelector:
        "beta.kubernetes.io/os": windows

Finally, we have the database. Here I am using SQL Server Express on Linux, mainly because there is no officially supported Docker image from Microsoft that runs on Windows Server 2019 (which is required, since AKS runs its Windows nodes on Windows Server 2019).

But this also highlights a very interesting and powerful feature of Kubernetes and AKS: the ability to mix Windows and Linux nodes in the same cluster, and even within the same application! This means that the whole ecosystem of Linux container images is available for Windows developers as well.


deployment-db.yaml

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: db
spec:
  replicas: {{ .Values.db.replicas }}
  template:
    metadata:
      labels:
        tier: db
    spec:
      containers:
      - name: db
        image: "{{ .Values.db.image.repository }}:{{ .Values.db.image.tag }}"
        ports:
        - containerPort: {{ .Values.db.containerPort }}
        env:
        - name: ACCEPT_EULA
          value: "Y"
        - name: SA_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-storage
              key: password
      nodeSelector:
        "beta.kubernetes.io/os": linux


To deploy the application, navigate to the root directory of the Helm chart (where the values.yaml file is located) and run:

helm upgrade --install quizbox . --values .\values.yaml

This will install the Helm chart and name the release “quizbox”. Running helm status quizbox shows the status of the deployment:

helm status quizbox

LAST DEPLOYED: Fri Jun 28 14:52:15 2019
NAMESPACE: default
STATUS: DEPLOYED

RESOURCES:
==> v1beta1/Deployment
NAME      DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
backend   1        1        1           0          9s
db        1        1        1           1          9s
frontend  1        1        1           0          9s

==> v1/Pod(related)
NAME                      READY  STATUS             RESTARTS  AGE
backend-69fd59c947-77tm4  0/1    ContainerCreating  0         9s
db-74dfcdcbff-79zsp       1/1    Running            0         9s
frontend-89d4b5b4b-rqw4q  0/1    ContainerCreating  0         9s

==> v1/Secret
NAME        TYPE    DATA  AGE
db-storage  Opaque  1     10s

==> v1/Service
NAME      TYPE          CLUSTER-IP    EXTERNAL-IP  PORT(S)       AGE
qboxdb    ClusterIP     10.0.153.253  <none>       1433/TCP      9s
frontend  LoadBalancer  10.0.132.46   <pending>    80:32608/TCP  9s
qboxapi   ClusterIP     10.0.235.72   <none>       80/TCP        9s


NOTES:
Helm chart for QuizBox deployed successfully!


Wait until the status of all pods is Running and until you see an EXTERNAL-IP address for the frontend service:

[Image: kubectl output showing running pods and the frontend EXTERNAL-IP]
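If you prefer the command line over the portal, you can keep an eye on this with kubectl (the frontend service name comes from the helm status output above):

# Watch the frontend service until an EXTERNAL-IP is assigned (Ctrl+C to stop)
kubectl get service frontend --watch

# In another terminal, check that all pods reach the Running state
kubectl get pods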


Open a browser and navigate to the external IP address; in a few seconds you should see the QuizBox application running:

[Image: The QuizBox application running in the browser]


This was a very simple walkthrough on how to get started with Windows applications on Azure Kubernetes Service. Hope you found it useful, and stay tuned for more blog posts on AKS and Windows in the near future!