
The Azure Kubernetes Workshop

Welcome to the Azure Kubernetes Workshop. In this lab, you’ll go through tasks that will help you master the basic and more advanced topics required to deploy a multi-container application to Kubernetes on Azure Kubernetes Service (AKS).

Some of the things you’ll be going through:

  • Kubernetes deployments, services and ingress
  • Deploying MongoDB using Helm
  • Azure Monitor for Containers, Horizontal Pod Autoscaler and the Cluster Autoscaler
  • Building CI/CD pipelines using Azure DevOps and Azure Container Registry
  • Scaling using Virtual Nodes, setting up SSL/TLS for your deployments, using Azure Key Vault for secrets

Prerequisites

Tools

You can use the Azure Cloud Shell accessible at https://shell.azure.com once you login with an Azure subscription. The Azure Cloud Shell has the Azure CLI pre-installed and configured to connect to your Azure subscription as well as kubectl and helm.

Azure subscription

If you have an Azure subscription

Please use your username and password to log in to https://portal.azure.com.

Also please authenticate your Azure CLI by running the command below on your machine and following the instructions.

az login

If you have been given access to a subscription as part of a lab, or you already have a Service Principal you want to use

If you have lab environment credentials similar to the below or you already have a Service Principal you will use with this workshop,

Lab environment credentials

Please then perform an az login on your machine using the command below, passing in the Application Id, the Application Secret Key and the Tenant Id.

az login --service-principal --username APP_ID --password "APP_SECRET" --tenant TENANT_ID

Kubernetes basics

Some prior knowledge of Kubernetes and its concepts is assumed.

If you are new to Kubernetes, start with the Kubernetes Learning Path then go through the concepts of what Kubernetes is and what it isn’t. If you are a more experienced Kubernetes developer or administrator, you may have a look at the Kubernetes best practices guide.

Application Overview

You will be deploying a customer-facing order placement and fulfillment application that is containerized and is architected for a microservice implementation.

Application diagram

The application consists of 3 components:

  • A public facing Order Capture swagger enabled API
  • A public facing frontend
  • A MongoDB database

Scoring

To add an element of competitiveness to the workshop, your solutions may be evaluated using both remote monitoring and objective assessments.

Try to maximize the number of successful requests (orders) processed per second submitted to the capture order endpoint (http://<your endpoint>:80/v1/order/).

Refer to the scaling section for guidance on how to run the load test.

Teamwork

Ideally, you should work in teams. You should have been provided a team name. If not, come up with a unique name and make sure to set it in the environment variable TEAMNAME to be able to properly track your progress.

Tasks

Useful resources are provided to help you work through each task. To progress at a good pace, divide the workload between team members where possible. This may mean anticipating work that will be required in a later task.

Hint: If you get stuck, you can ask for help from the proctors. You may also choose to peek at the solutions.

Core tasks

You are expected to at least complete the Getting up and running section. This involves setting up a Kubernetes cluster, deploying the application containers from Docker Hub, setting up monitoring and scaling your application.

DevOps tasks

Once you’re done with the above, next would be to include some DevOps. Complete as many tasks as you can. You’ll be setting up a Continuous Integration and Continuous Delivery pipeline for your application and then using Helm to deploy it.

Advanced cluster tasks

If you’re up to it, explore configuring the Azure Kubernetes Service cluster with Virtual Nodes, enabling MongoDB replication, using HashiCorp’s Terraform to deploy AKS and your application and more.

Getting up and running

Deploy Kubernetes with Azure Kubernetes Service (AKS)

Azure has a managed Kubernetes service, AKS (Azure Kubernetes Service).

Tasks

Get the latest Kubernetes version available in AKS

Get the latest available Kubernetes version in your preferred region into a bash variable. Replace <region> with the region of your choosing, for example eastus.

version=$(az aks get-versions -l <region> --query 'orchestrators[-1].orchestratorVersion' -o tsv)

Create a Resource Group

az group create --name akschallenge --location <region>

Now you need to create the AKS cluster

Note You can create AKS clusters that support the cluster autoscaler. However, please note that the AKS cluster autoscaler is a preview feature, and enabling it is a more involved process. AKS preview features are self-service and opt-in. Previews are provided to gather feedback and bugs from our community. However, they are not supported by Azure technical support. If you create a cluster, or add these features to existing clusters, that cluster is unsupported until the feature is no longer in preview and graduates to general availability (GA).

Option 1: Create an AKS cluster without the cluster autoscaler

Create AKS using the latest version and enable the monitoring addon

  az aks create --resource-group akschallenge \
    --name <unique-aks-cluster-name> \
    --location <region> \
    --enable-addons monitoring \
    --kubernetes-version $version \
    --generate-ssh-keys

Important: If you are using Service Principal authentication, for example in a lab environment, you’ll need to use an alternate command to create the cluster with your existing Service Principal passing in the Application Id and the Application Secret Key.

az aks create --resource-group akschallenge \
  --name <unique-aks-cluster-name> \
  --location <region> \
  --enable-addons monitoring \
  --kubernetes-version $version \
  --generate-ssh-keys \
  --service-principal <application ID> \
  --client-secret "<application secret key>"
Option 2 (Preview): Create an AKS cluster with the cluster autoscaler

AKS clusters that support the cluster autoscaler must use virtual machine scale sets and run Kubernetes version 1.12.4 or later. This scale set support is in preview. To opt in and create clusters that use scale sets, first install the aks-preview Azure CLI extension using the az extension add command, as shown in the following example:

  az extension add --name aks-preview

To create an AKS cluster that uses scale sets, you must also enable a feature flag on your subscription. To register the VMSSPreview feature flag, use the az feature register command as shown in the following example:

  az feature register --name VMSSPreview --namespace Microsoft.ContainerService

It takes a few minutes for the status to show Registered. You can check on the registration status using the az feature list command:

  az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService/VMSSPreview')].{Name:name,State:properties.state}"

When ready, refresh the registration of the Microsoft.ContainerService resource provider using the az provider register command:

  az provider register --namespace Microsoft.ContainerService

Use the az aks create command specifying the --enable-cluster-autoscaler parameter, and a node --min-count and --max-count.

Note During preview, you can’t set a higher minimum node count than is currently set for the cluster. For example, if you currently have min count set to 1, you can’t update the min count to 3.

  az aks create --resource-group akschallenge \
    --name <unique-aks-cluster-name> \
    --location <region> \
    --enable-addons monitoring \
    --kubernetes-version $version \
    --generate-ssh-keys \
    --enable-cluster-autoscaler \
    --min-count 1 \
    --max-count 3

Important: If you are using Service Principal authentication, for example in a lab environment, you’ll need to use an alternate command to create the cluster with your existing Service Principal passing in the Application Id and the Application Secret Key.

az aks create --resource-group akschallenge \
  --name <unique-aks-cluster-name> \
  --location <region> \
  --enable-addons monitoring \
  --kubernetes-version $version \
  --generate-ssh-keys \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 3 \
  --service-principal <application ID> \
  --client-secret "<application secret key>"

Ensure you can connect to the cluster using kubectl

Note kubectl, the Kubernetes CLI, is already installed on the Azure Cloud Shell.

Authenticate

az aks get-credentials --resource-group akschallenge --name <unique-aks-cluster-name>

List the available nodes

kubectl get nodes

Resources

Deploy MongoDB

You need to deploy MongoDB in a way that is scalable and production ready. There are a couple of ways to do so.

Hints

  • Be careful with the authentication settings when creating MongoDB. It is recommended that you create a standalone username/password and database.
  • Important: If you install using Helm and then delete the release, the MongoDB data and configuration persist in a Persistent Volume Claim. If you redeploy using the same release name, the new release’s authentication configuration will not match the stale data and the pods will fail to authenticate. If you need to delete the Helm deployment and start over, make sure you also delete the Persistent Volume Claims that were created. Find those claims using kubectl get pvc (a short sketch follows this list).
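
A minimal sketch of that cleanup; the claim name below is a placeholder, so copy the real one from the kubectl get pvc output (it depends on the release name you chose):

# List the claims left behind by the MongoDB chart
kubectl get pvc

# Delete the stale claim so a fresh install starts with clean credentials
kubectl delete pvc <mongodb-pvc-name>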

Tasks

Deploy an instance of MongoDB to your cluster. The application expects a database called akschallenge

The recommended way to deploy MongoDB is to use Helm. Helm is a Kubernetes application package manager, and there is a MongoDB Helm chart that supports replication and horizontal scaling.

Note Helm is installed on the Azure Cloud Shell.

Initialize the Helm components on the AKS cluster (AKS clusters created through the CLI are RBAC enabled by default; RBAC is optional when creating a cluster from the Azure Portal)

If the cluster is RBAC enabled, you have to create the appropriate ServiceAccount for Tiller (the server side Helm component) to use.

Save the YAML below as helm-rbac.yaml or download it from helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

And deploy it using

kubectl apply -f helm-rbac.yaml

Initialize Tiller (omit the --service-account flag if your cluster is not RBAC enabled)

helm init --service-account tiller
Install the MongoDB Helm chart

After Tiller is initialized in the cluster, wait a short while, then install the MongoDB chart and take note of the username, password and endpoints created. The command below creates a user called orders-user with a password of orders-password

helm install stable/mongodb --name orders-mongo --set mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge

Hint By default, the service load balancing the MongoDB cluster would be accessible at orders-mongo-mongodb.default.svc.cluster.local

You’ll need to use the user created in the command above when configuring the deployment environment variables.
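
If you want to sanity-check those credentials before wiring them into the deployment, here is a minimal sketch; the mongo:3.6 client image and the connection string format are assumptions, not part of the workshop:

# Run a throwaway MongoDB client pod and query the akschallenge database
kubectl run mongo-client --rm -it --restart=Never --image=mongo:3.6 -- \
  mongo "mongodb://orders-user:orders-password@orders-mongo-mongodb.default.svc.cluster.local:27017/akschallenge" --eval "db.stats()"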

Resources

Deploy the Order Capture API

You need to deploy the Order Capture API (azch/captureorder). It requires an external endpoint exposing the API on port 80 and needs to write to MongoDB.

Container images and source code

In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.

Component Docker Image Source Code Build Status
Order Capture API azch/captureorder source-code Build Status

Environment variables

The Order Capture API requires certain environment variables to properly run and track your progress. Make sure you set those environment variables.

  • TEAMNAME="[YourTeamName]"
    • Track your team’s progress. Use your assigned team name.
  • CHALLENGEAPPINSIGHTS_KEY="[AsSpecifiedAtTheEvent]"
    • Application Insights key if provided by proctors. This is used to track your team’s progress. If not provided, just delete it.
  • MONGOHOST="<hostname of mongodb>"
    • MongoDB hostname.
  • MONGOUSER="<mongodb username>"
    • MongoDB username.
  • MONGOPASSWORD="<mongodb password>"
    • MongoDB password.

Hint: The Order Capture API exposes the following endpoint for health-checks: http://[PublicEndpoint]:[port]/healthz

Tasks

Provision the captureorder deployment and expose a public endpoint

Deployment

Save the YAML below as captureorder-deployment.yaml or download it from captureorder-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder
spec:
  selector:
      matchLabels:
        app: captureorder
  replicas: 2
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /healthz
          livenessProbe:
            httpGet:
              port: 8080
              path: /healthz
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: TEAMNAME
            value: "team-azch"
          #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
          #  value: "" # uncomment and set value only if you've been provided a key
          - name: MONGOHOST
            value: "orders-mongo-mongodb.default.svc.cluster.local"
          - name: MONGOUSER
            value: "orders-user"
          - name: MONGOPASSWORD
            value: "orders-password"
          ports:
          - containerPort: 8080

And deploy it using

kubectl apply -f captureorder-deployment.yaml
Verify that the pods are up and running
kubectl get pods -l app=captureorder

Hint If the pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and kubectl describe pod <pod name>.

Service

Save the YAML below as captureorder-service.yaml or download it from captureorder-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: captureorder
spec:
  selector:
    app: captureorder
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: LoadBalancer

And deploy it using

kubectl apply -f captureorder-service.yaml
Retrieve the External-IP of the Service

Use the command below. Make sure to allow a couple of minutes for the Azure Load Balancer to assign a public IP.

kubectl get service captureorder -o jsonpath="{.status.loadBalancer.ingress[*].ip}"
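
Optionally, a quick sanity check before placing orders: capture the IP into a shell variable and hit the health-check endpoint mentioned earlier (the variable name is just illustrative):

# Store the public IP and probe the healthz endpoint through the Service
SERVICE_IP=$(kubectl get service captureorder -o jsonpath="{.status.loadBalancer.ingress[*].ip}")
curl http://$SERVICE_IP/healthz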

Ensure orders are successfully written to MongoDB

Send a POST request using Postman or curl to the IP of the service you got from the previous command

curl -d '{"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100}' -H "Content-Type: application/json" -X POST http://[Your Service Public LoadBalancer IP]/v1/order

You should get back the created order ID

{
    "orderId": "5beaa09a055ed200016e582f"
}

Resources

Deploy the frontend using Ingress

You need to deploy the frontend (azch/frontend). It requires an external endpoint exposing the website on port 80 and needs to connect to the Order Capture API public IP.

Container images and source code

In the table below, you will find the Docker container images provided by the development team on Docker Hub as well as their corresponding source code on GitHub.

Component Docker Image Source Code Build Status
Frontend azch/frontend source-code Build Status

Environment variables

The frontend requires certain environment variables to properly run and track your progress. Make sure you set those environment variables.

  • CAPTUREORDERSERVICEIP="<public IP of order capture service>"

Tasks

Provision the frontend deployment

Deployment

Save the YAML below as frontend-deployment.yaml or download it from frontend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
      matchLabels:
        app: frontend
  replicas: 1
  template:
      metadata:
        labels:
            app: frontend
      spec:
        containers:
        - name: frontend
          image: azch/frontend
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /
          livenessProbe:
            httpGet:
              port: 8080
              path: /
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: CAPTUREORDERSERVICEIP
            value: "<public IP of order capture service>"
          ports:
          - containerPort: 8080

And deploy it using

kubectl apply -f frontend-deployment.yaml
Verify that the pods are up and running
kubectl get pods -l app=frontend

Hint If the pods are not starting, not ready or are crashing, you can view their logs using kubectl logs <pod name> and kubectl describe pod <pod name>.

Expose the frontend on a hostname

Instead of accessing the frontend through an IP address, you would like to expose the frontend over a hostname. Explore using Kubernetes Ingress with the AKS HTTP Application Routing add-on to achieve this.

When you enable the add-on, it deploys two components: a Kubernetes Ingress controller and an External-DNS controller.

  • Ingress controller: The Ingress controller is exposed to the internet by using a Kubernetes service of type LoadBalancer. The Ingress controller watches and implements Kubernetes Ingress resources, which creates routes to application endpoints.
  • External-DNS controller: Watches for Kubernetes Ingress resources and creates DNS A records in the cluster-specific DNS zone using Azure DNS.
Enable the HTTP routing add-on on your cluster
az aks enable-addons --resource-group akschallenge --name <unique-aks-cluster-name> --addons http_application_routing

This will take a few minutes.

Service

Save the YAML below as frontend-service.yaml or download it from frontend-service.yaml

Note Since you’re going to expose the deployment using an Ingress, there is no need to use a public IP for the Service, hence you can set the type of the service to be ClusterIP instead of LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
  type: ClusterIP

And deploy it using

kubectl apply -f frontend-service.yaml
Ingress

The HTTP application routing add-on may only be triggered on Ingress resources that are annotated as follows:

annotations:
  kubernetes.io/ingress.class: addon-http-application-routing

Retrieve your cluster specific DNS zone name by running the command below

az aks show --resource-group akschallenge --name <unique-aks-cluster-name> --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o table

You should get back something like 9f9c1fe7-21a1-416d-99cd-3543bb92e4c3.eastus.aksapp.io.
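
If you'd rather capture the zone name into a shell variable than copy it by hand, a small sketch (using -o tsv instead of -o table so the raw value is returned):

# Store the cluster-specific DNS zone name for later substitution
DNS_ZONE=$(az aks show --resource-group akschallenge --name <unique-aks-cluster-name> --query addonProfiles.httpApplicationRouting.config.HTTPApplicationRoutingZoneName -o tsv)
echo $DNS_ZONE

You can then substitute $DNS_ZONE into the Ingress manifest below, for example with sed, instead of editing the file manually.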

Create an Ingress resource that is annotated with the required annotation and make sure to replace <CLUSTER_SPECIFIC_DNS_ZONE> with the DNS zone name you retrieved from the previous command.

Additionally, make sure that the serviceName and servicePort point to the correct values for the Service you deployed previously.

Save the YAML below as frontend-ingress.yaml or download it from frontend-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
  - host: frontend.<CLUSTER_SPECIFIC_DNS_ZONE>
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /

And create it using

kubectl apply -f frontend-ingress.yaml

Verify that the DNS records are created

View the logs of the External DNS pod

kubectl logs -f deploy/addon-http-application-routing-external-dns -n kube-system

It should say something about updating the A record. It may take a few minutes.

time="2019-02-13T01:58:25Z" level=info msg="Updating A record named 'frontend' to '13.90.199.8' for Azure DNS zone 'b3ec7d3966874de389ba.eastus.aksapp.io'."
time="2019-02-13T01:58:26Z" level=info msg="Updating TXT record named 'frontend' to '"heritage=external-dns,external-dns/owner=default"' for Azure DNS zone 'b3ec7d3966874de389ba.eastus.aksapp.io'."

You should also be able to find the new records created in the Azure DNS zone for your cluster.

Azure DNS

Browse to the public hostname of the frontend and watch as the number of orders change

Once the Ingress is deployed and the DNS records propagated, you should be able to access the frontend at http://frontend.[cluster_specific_dns_zone], for example http://frontend.9f9c1fe7-21a1-416d-99cd-3543bb92e4c3.eastus.aksapp.io

If it doesn’t work from the first trial, give it a few more minutes or try a different browser.
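
To check whether the delay is DNS propagation or the application itself, a quick sketch from your shell (replace the zone with yours):

# Confirm the A record resolves, then request the page headers
nslookup frontend.<CLUSTER_SPECIFIC_DNS_ZONE>
curl -I http://frontend.<CLUSTER_SPECIFIC_DNS_ZONE>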

Orders frontend

Resources

Monitoring

You would like to monitor the performance of different components in your application, view logs and get alerts whenever your application availability goes down or some components fail.

Use a combination of the available tools to set up alerting capabilities for your application.

Tasks

Leverage integrated Azure Kubernetes Service monitoring to figure out if requests are failing, inspect logs and monitor your cluster health

If you didn’t create an AKS cluster with monitoring enabled, you can enable the add-on by running:

az aks enable-addons --resource-group akschallenge --name <unique-aks-cluster-name> --addons monitoring

  • Check the cluster utilization under load
    Cluster utilization

  • Identify which pods are causing trouble
    Pod utilization

View the live container logs

If the cluster is RBAC enabled, you have to create the appropriate ClusterRole and ClusterRoleBinding.

Save the YAML below as logreader-rbac.yaml or download it from logreader-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
   name: containerHealth-log-reader
rules:
   - apiGroups: [""]
     resources: ["pods/log"]
     verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
   name: containerHealth-read-logs-global
roleRef:
  kind: ClusterRole
  name: containerHealth-log-reader
  apiGroup: rbac.authorization.k8s.io
subjects:
   - kind: User
     name: clusterUser
     apiGroup: rbac.authorization.k8s.io

And deploy it using

kubectl apply -f logreader-rbac.yaml

If your Kubernetes cluster is not configured with Kubernetes RBAC authorization or integrated with Azure AD single sign-on, you do not need to follow the steps above. Because Kubernetes authorization uses the Kubernetes API, read-only permissions are required.

Head over to the AKS cluster on the Azure portal, click on Insights under Monitoring, click on the Containers tab and pick a container to view its live logs and debug what is going on.

Azure Monitor for Containers: Live Logs

Resources

Scaling

As the popularity of the application grows, it needs to scale appropriately as demand changes. Ensure the application remains responsive as the number of order submissions increases.

Tasks

Run a baseline load test

There is a container image on Docker Hub (azch/loadtest) that is preconfigured to run the load test. You may run it in Azure Container Instances using the command below

az container create -g akschallenge -n loadtest --image azch/loadtest --restart-policy Never -e SERVICE_IP=<public ip of order capture service>

This will fire off a series of increasing loads of concurrent users (100, 400, 1600, 3200, 6400) POSTing requests to your Order Capture API endpoint with some wait time in between to simulate an increased pressure on your application.

You may stream the logs of the Azure Container Instance by running the command below. You may need to wait a few minutes to get the full logs, or run this command multiple times.

az container logs -g akschallenge -n loadtest

When you’re done, you may delete it by running

az container delete -g akschallenge -n loadtest

Make note of the results (sample below) and figure out the breaking point in the number of concurrent users.

Phase 5: Load test - 30 seconds, 6400 users.

Summary:
  Total:	41.1741 secs
  Slowest:	23.7166 secs
  Fastest:	0.8882 secs
  Average:	9.7952 secs
  Requests/sec:	569.1929

  Total data:	1003620 bytes
  Size/request:	43 bytes

Response time histogram:
  0.888 [1]	|
  3.171 [1669]	|■■■■■■■■■■■■■■
  5.454 [1967]	|■■■■■■■■■■■■■■■■■
  7.737 [4741]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  10.020 [3660]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  12.302 [3786]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  14.585 [4189]	|■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  16.868 [2583]	|■■■■■■■■■■■■■■■■■■■■■■
  19.151 [586]	|■■■■■
  21.434 [151]	|■
  23.717 [7]	|

Status code distribution:
  [200]	23340 responses

Error distribution:
  [96]	Post http://23.96.91.35/v1/order: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

You may use Azure Monitor (previous task) to view the logs and figure out where you need to optimize to increase the throughput (requests/sec) and reduce the average latency and error count.

Azure Monitor container insights

Create Horizontal Pod Autoscaler

Most likely in your initial test, the captureorder container was the bottleneck. So the first step would be to scale it out. There are two ways to do so, you can either manually increase the number of replicas in the deployment, or use Horizontal Pod Autoscaler.

Horizontal Pod Autoscaler allows Kubernetes to detect when your deployed pods need more resources and then it schedules more pods onto the cluster to cope with the demand.

Save the YAML below as captureorder-hpa.yaml or download it from captureorder-hpa.yaml

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: captureorder
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: captureorder
  minReplicas: 4
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

And deploy it using

kubectl apply -f captureorder-hpa.yaml

Important For the Horizontal Pod Autoscaler to work, you MUST remove the explicit replicas: 2 count from your captureorder deployment and redeploy it, and your pods must define resource requests and resource limits. A sketch of one way to do this follows.
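
One way to do that, assuming you still have captureorder-deployment.yaml from the earlier step; the sed expression is just a shortcut for deleting the replicas line by hand:

# Drop the fixed replica count and reapply the deployment
sed -i '/replicas: 2/d' captureorder-deployment.yaml
kubectl apply -f captureorder-deployment.yaml

# Watch the autoscaler adjust the replica count during the load test
kubectl get hpa captureorder -w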

Run a load test again after applying Horizontal Pod Autoscaler

If you didn’t delete the load testing Azure Container Instance, delete it now

az container delete -g akschallenge -n loadtest

Running the load test again

az container create -g akschallenge -n loadtest --image azch/loadtest --restart-policy Never -e SERVICE_IP=<public ip of order capture service>

Observe your Kubernetes cluster reacting to the load by running

kubectl get pods -l  app=captureorder

Check whether your cluster nodes need to scale/autoscale

If your AKS cluster is not configured with the cluster autoscaler, scale the cluster nodes using the command below to the required number of nodes

az aks scale --resource-group akschallenge --name <unique-aks-cluster-name> --node-count 4

Otherwise, if you configured your AKS cluster with cluster autoscaler, you should see it dynamically adding and removing nodes based on the cluster utilization. To change the node count, use the az aks update command and specify a minimum and maximum value. The following example sets the --min-count to 1 and the --max-count to 5:

az aks update \
  --resource-group akschallenge \
  --name <unique-aks-cluster-name> \
  --update-cluster-autoscaler \
  --min-count 1 \
  --max-count 5

Note During preview, you can’t set a higher minimum node count than is currently set for the cluster. For example, if you currently have min count set to 1, you can’t update the min count to 3.

Resources

Create private highly available container registry

Instead of using the public Docker Hub registry, create your own private container registry using Azure Container Registry (ACR).

Tasks

Create an Azure Container Registry (ACR)

az acr create --resource-group akschallenge --name <unique-acr-name> --sku Standard --location eastus

Use Azure Container Registry Build to push the container images to your new registry

Note The Azure Cloud Shell is already authenticated against Azure Container Registry. You don’t need to run az acr login, which also won’t work on the Cloud Shell because it requires the Docker daemon to be running.

Clone the application code on Azure Cloud Shell

git clone https://github.com/Azure/azch-captureorder.git
cd azch-captureorder

Use Azure Container Registry Build to build and push the container images

az acr build -t "captureorder:{{.Run.ID}}" -r <unique-acr-name> .

Note You’ll get a build ID in a message similar to Run ID: ca3 was successful after 3m14s. Use ca3 in this example as the image tag in the next step.
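
To confirm which tags have been pushed (a quick check; the repository name captureorder matches the -t argument above):

az acr repository show-tags --name <unique-acr-name> --repository captureorder --output table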

Configure your application to pull from your private registry

Before you can use an image stored in a private registry you need to ensure your Kubernetes cluster has access to that registry. There are two ways one can achieve this. You can use either method to complete this task.

  1. Grant AKS-generated Service Principal access to ACR (assumes use of AKS and ACR)
  2. Create a Kubernetes Secret
Grant the AKS-generated Service Principal access to ACR

Authorize the AKS cluster to connect to the Azure Container Registry using the AKS generated Service Principal.

Follow the Azure docs to learn how to grant access using Azure Active Directory Service Principals.

AKS_RESOURCE_GROUP=myAKSResourceGroup
AKS_CLUSTER_NAME=myAKSCluster
ACR_RESOURCE_GROUP=myACRResourceGroup
ACR_NAME=myACRRegistry

# Get the id of the service principal configured for AKS
CLIENT_ID=$(az aks show --resource-group $AKS_RESOURCE_GROUP --name $AKS_CLUSTER_NAME --query "servicePrincipalProfile.clientId" --output tsv)

# Get the ACR registry resource id
ACR_ID=$(az acr show --name $ACR_NAME --resource-group $ACR_RESOURCE_GROUP --query "id" --output tsv)

# Create role assignment
az role assignment create --assignee $CLIENT_ID --role acrpull --scope $ACR_ID
Create a Kubernetes Secret

Create a docker-registry secret using your docker-username (service principal ID), docker-password (service principal password), and a docker-email (email address)

kubectl create secret docker-registry acr-auth --docker-server <acr-login-server> --docker-username <service-principal-ID> --docker-password <service-principal-password> --docker-email <email-address>

Update your deployment with a reference to the created secret.

spec:
  imagePullSecrets:
  - name: acr-auth
  containers:

After you grant your Kubernetes cluster access to your private registry, you can update your deployment with the image you built in the previous step.

Kubernetes is declarative and keeps a manifest of all object resources. Edit your deployment object with the updated image.

Note This is not a recommended method to edit a deployment, nor is it a best practice. However, there is value in understanding the declarative nature of Kubernetes; it’s also an opportunity to watch the scheduler do what it’s supposed to do.

From your Azure Cloud Shell run:

kubectl edit deploy

Replace the image tag with the location of the new image on Azure Container Registry. Replace <build id> with the ID you got from the message similar to Run ID: ca3 was successful after 3m14s after the build was completed.

spec:
  containers:
  - name: captureorder
    image: <unique-acr-name>.azurecr.io/captureorder:<build id>

Note If you created a Kubernetes secret, do not just copy and paste this section, as there is no reference to imagePullSecrets in it.

Quit the editor and run kubectl get pods

If you successfully granted your cluster access to your private registry, you will see one pod terminating and a new one creating. Your new pod should be up and running within 10 seconds.
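
Rather than polling kubectl get pods, you can also watch the rollout complete:

kubectl rollout status deployment/captureorder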

Resources

DevOps tasks

Continuous Integration and Continuous Delivery

Your development team is making an increasing number of modifications to your application code. It is no longer feasible to deploy updates manually.

You are required to create a robust DevOps pipeline supporting CI/CD to deploy code changes.

Hint

  • The source code repositories on GitHub contain an azure-pipelines.yml definition that you can use with Azure Pipelines to build the containers. This pipeline pushes the images to Docker Hub. You may need to edit it a bit if you want to do something different, like pushing to Azure Container Registry. You may also roughly follow the steps in it on your own CI/CD tool, such as Jenkins.
  • Make sure you tokenize the Docker image tags in your Kubernetes YAML configuration files instead of using latest. You’ll need to set those in the build pipeline to the Build ID.
  • You may use the diagram below as guidance.

CI/CD example

Tasks

If you peek into the solutions, they’re using Azure DevOps. You may choose to follow the same process on Jenkins or another CI/CD tool of your choice.

Create an Azure DevOps account

Go to https://dev.azure.com and sign in with your Azure subscription credentials.

If this is your first time provisioning an Azure DevOps account, you’ll be taken through a quick wizard to create a new organization.

Getting started with Azure DevOps

Create a project

Create a new private project, call it azch-captureorder

Create Azure DevOps project

Fork the source repositories on GitHub or import them to Azure Repos

Click on Repos then import the code of the captureorder service from the public GitHub repository located at http://github.com/Azure/azch-captureorder.git

Import repository to Azure Repos

Import repository to Azure Repos

Create build pipeline for the application Docker container

Save the YAML below as azure-pipelines.yml or download it from azure-pipelines.captureorder.yml and store it in your code repository (azch-captureorder) as azure-pipelines.yml

This simply runs docker build, docker login and docker push, tagging the image with the current BuildId.

pool:
  vmImage: 'Ubuntu 16.04'

variables:
  imageName: 'captureorder:$(Build.BuildId)'
  # define three more variables acrName, dockerId and dockerPassword in the build pipeline in UI

steps:
- script: docker build -f Dockerfile -t $(acrName).azurecr.io/$(imageName) .
  displayName: 'docker build'

- script: docker login -u $(dockerId) -p $(dockerPassword) $(acrName).azurecr.io
  displayName: 'docker login'

- script: docker push $(acrName).azurecr.io/$(imageName)
  displayName: 'docker push'

Build the code in azch-captureorder as a Docker image and push it to the Azure Container Registry you provisioned before

Set up a build using YAML pipelines

Setup build

Choose YAML as the pipeline template

Pipeline Config-As-Code

Browse to and select the azure-pipelines.yml file you created above. You may also change the agent to be Hosted Ubuntu

Select pipeline config file

Go to the “Variables” tab and create a Variable group

Go to the "Variables" tab

Define variables in your build pipeline in the web UI:

  • dockerId: The admin user name/Service Principal ID for the Azure Container Registry.
  • acrName: The Azure Container Registry name.
  • dockerPassword: The admin password/Service Principal password for Azure Container Registry.

Create variable group

Hint:

Run the build pipeline and verify that it works

Build pipeline log

Verify that the image ends up in your Azure Container Registry

Verify images in ACR

Create a new Azure DevOps repo, for example azch-captureorder-kubernetes, to hold the YAML configuration for Kubernetes

The reason you’re creating a separate repository is that the Kubernetes deployment configuration is a deployment artifact, which is independent from your code. You may want to change how the container is deployed, which Kubernetes services are created, etc., without triggering a new container build. For this reason, having a separate repository is the recommended way to go about this, encouraging separation of concerns. This decouples the application code from where it runs. You build containers in one pipeline, but you’re not concerned with where they will be deployed. You may have multiple other repos and pipelines controlling how you deploy.

Go ahead and create a new repo and call it azch-captureorder-kubernetes. Hit the Initialize button to create a README.md file and add the .gitignore file.

Create a new repo and initialize it

In the new repository, create a folder yaml and add the required YAML files you created before for the service you’re building.

Create YAML folder

You may download the YAML files again from the links below. Make sure you store them in the yaml folder on the azch-captureorder-kubernetes repository.

Hints

  • One thing you’ll notice is that, in captureorder-deployment.yaml, you’ll need to change the image name to <unique-acr-name>.azurecr.io/captureorder:##BUILD_ID##, putting in your Azure Container Registry name.
  • Also notice the ##BUILD_ID##. This is a placeholder that will get replaced further down the line by the release pipeline by the actual version being deployed.

Create build pipeline for the Kubernetes config files

Save the YAML below as azure-pipelines.yml or download it from azure-pipelines.captureorder-k8s.yml and store it in your Kubernetes config repository (azch-captureorder-kubernetes) as azure-pipelines.yml

This essentially copies the yaml folder as a build artifact. The artifact will be picked up by the Release pipeline later on for deployment to the cluster.

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- task: PublishBuildArtifacts@1
  displayName: 'publish yaml folder as an artifact'
  inputs:
    artifactName: 'yaml'
    pathToPublish: 'yaml'

Similarly to how you set up the Docker image build pipeline, set up a build pipeline using YAML pipelines for the azch-captureorder-kubernetes repo. Run it once you save, and verify you get the yaml folder copied as a build artifact.

Create a continuous deployment pipeline

You’ll now create the CD pipeline on the azch-captureorder-kubernetes repository that triggers upon either new container images or new YAML configuration artifacts to deploy the changes to your cluster.

Configure a Service Connection so that Azure DevOps can access resources in your Azure Resource Group for deployment and configuration purposes

Create Service Connection

Pick the Azure Resource Group you’re using

Pick Azure RG for Service Connection

Create a Release Pipeline, start with an Empty template. Add an Azure Container Registry artifact as a trigger and enable the continuous deployment trigger. Make sure to configure it to point to the Azure Container Registry repository where the build pipeline is pushing the captureorder image

ACR artifact trigger

Add another Build artifact coming from the azch-captureorder-kubernetes pipeline as a trigger and enable the continuous deployment trigger. This is the trigger for changes in the YAML configuration.

Hint Make sure to pick the kubernetes build pipeline and not your main code build pipeline. Also make sure you select Latest as the default version

Build artifact trigger

Now, start adding tasks to the default stage. Make sure the agent pool is Hosted Ubuntu 1604, then add an inline Bash Script task that does a token replacement, replacing ##BUILD_ID## in the captureorder-deployment.yaml file coming from the build artifact with the actual build number being released. Remember that captureorder-deployment.yaml was published as a build artifact.

You’ll want to get the Docker container tag incoming from the Azure Container Registry trigger to replace the ##BUILD_ID## token. If you named that artifact _captureorder, the build number will be in an environment variable called RELEASE_ARTIFACTS__CAPTUREORDER_BUILDNUMBER. Similarly for the other artifact _azch-captureorder-kubernetes, its build ID would be stored in RELEASE_ARTIFACTS__AZCH-CAPTUREORDER-KUBERNETES-CI_BUILDID. You can use the following inline script that uses the sed tool.

sed -i "s/##BUILD_ID##/${RELEASE_ARTIFACTS__CAPTUREORDER_BUILDNUMBER}/g" "$SYSTEM_ARTIFACTSDIRECTORY/_azch-captureorder-kubernetes-CI/yaml/captureorder-deployment.yaml"

Bash task

Add a Deploy to Kubernetes task. Configure access to your AKS cluster using the service connection created earlier.

Scroll down and check Use configuration files and use the following value $(System.DefaultWorkingDirectory)/_azch-captureorder-kubernetes-CI/yaml/captureorder-deployment.yaml or select it from the browse button.

Kubernetes task

Hint Do the same for captureorder-service.yaml and captureorder-hpa.yaml. You can right click on the Kubernetes task and clone it.

Once you’re done, you should have the tasks looking like the following.

Kubernetes task

Create a manual release and pick the latest build as the source. Verify the release runs and that the captureorder service is deployed

Create a release

Verify everything works

  1. Make a change to the application source code, commit the change and watch the pipelines build and release the new version.

  2. Make a change to the configuration (for example, change the number of replicas), commit the change and watch the pipelines update your configuration.

CI/CD release in progress

kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
captureorder-64b49756b6-8df8p      1/1     Running   0          55s
captureorder-64b49756b6-fjk9d      1/1     Running   0          56s
captureorder-64b49756b6-rrhck      1/1     Running   0          59s
captureorder-64b49756b6-vscn7      1/1     Running   0          1m

Resources

Package your app with Helm

You spent quite a lot of time setting up the application with multiple Kubernetes config files. Wouldn’t it be nice to package your entire application and be able to deploy it with Helm, just like you can deploy MongoDB?

Hint You may use the diagram below as guidance.

Helm example

Tasks

Package your app as a Helm chart

Consider using template variables to be able to quickly change environment variables you pass into the chart.

On your machine, make sure you’ve run helm init before.

helm init

You can create a new chart and call it captureorder by running helm create captureorder. You can also download the preconfigured Helm chart code from captureorder-chart.zip, unzip it and save it in the same repository that holds your Kubernetes config files (azch-captureorder-kubernetes).

Let’s look at the folder structure.

Helm example

In the templates folder, you’ll find 3 files, corresponding to the 3 YAML files you used to deploy the application to Kubernetes before. The main difference is that many of the constants, like the image name, number of replicas and environment variables, have been parameterized so that the actual values can be passed at deploy time through the values.yaml file or through the command line.

You’ll also find a values.yaml file with some default values.

Hint

  • You’ll need to change <unique-acr-name> to your Azure Container Registry endpoint.
  • It isn’t a secure practice to store sensitive data like passwords in the config file. The better approach in production would be to use Kubernetes Secrets.
minReplicaCount: 1
maxReplicaCount: 2
targetCPUUtilizationPercentage: 50
teamName: azch-team
appInsightKey: ""
mongoHost: "orders-mongo-mongodb.default.svc.cluster.local"
mongoUser: "orders-user"
mongoPassword: "orders-password"

image:
  repository: <unique-acr-name>.azurecr.io/captureorder
  tag: # Will be set at command runtime
  pullPolicy: Always
  
service:
  type: LoadBalancer
  port: 80

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi
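
Before wiring the chart into the release pipeline, you can try rendering or installing it locally. A sketch, assuming the chart lives under helm/captureorder in your clone and using a hypothetical release name and tag:

# Render the chart against Tiller without creating any resources
helm install ./helm/captureorder \
  --name ordersapi-test \
  --set image.tag=<build id> \
  --dry-run --debug

Remove --dry-run to actually deploy; the release pipeline below performs the equivalent upgrade with the real build number.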

Reconfigure the build pipeline for azch-captureorder-kubernetes

Reconfigure the Kubernetes config build pipeline to publish the helm folder instead as a build artifact, by editing its azure-pipelines.yml file as shown below.

pool:
  vmImage: 'Ubuntu 16.04'

steps:
- task: PublishBuildArtifacts@1
  displayName: 'publish helm folder as an artifact'
  inputs:
    artifactName: 'helm'
    pathToPublish: 'helm'

Deploying it again using Helm

Edit the Release Definition you created previously for releasing through YAML files by removing all tasks and adding a Helm task and configuring it to connect using your Service Endpoint.

  • Use the upgrade command
  • Browse to the chart and set the location to $(System.DefaultWorkingDirectory)/_azch-captureorder-kubernetes-CI/helm/captureorder
  • Browse to the values.yaml file and set the location to $(System.DefaultWorkingDirectory)/_azch-captureorder-kubernetes-CI/helm/captureorder/values.yaml
  • Set the release name field to something like ordersapi
  • Set the values field to image.tag=$(Release.Artifacts._captureorder.BuildNumber)

Release definition using helm

Validate that the release was deployed and that you can access the Orders API. You can view the release logs to get the IP.

Release using Helm is complete

Resources

Advanced tasks

The below tasks can be done in any order. You’re not expected to do all of them; pick what you’d like to try out!

Azure Kubernetes Service Virtual Nodes using ACI

To rapidly scale application workloads in an Azure Kubernetes Service (AKS) cluster, you can use Virtual Nodes. With Virtual Nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don’t need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods.

Note

  • We will be using virtual nodes to scale out our API using Azure Container Instances (ACI).
  • These container instances will be in a private VNET, so we must deploy a new AKS cluster with advanced networking.

Tasks

Create a virtual network and subnet

Create a VNET

az network vnet create \
    --resource-group akschallenge \
    --name myVnet \
    --address-prefixes 10.0.0.0/8 \
    --subnet-name myAKSSubnet \
    --subnet-prefix 10.240.0.0/16

And an additional subnet

az network vnet subnet create \
    --resource-group akschallenge \
    --vnet-name myVnet \
    --name myVirtualNodeSubnet \
    --address-prefix 10.241.0.0/16

Create a service principal and assign permissions to VNET

Create a service principal

az ad sp create-for-rbac --skip-assignment

Output will look similar to below. You will use the appID and password in the next step.

{
  "appId": "7248f250-0000-0000-0000-dbdeb8400d85",
  "displayName": "azure-cli-2017-10-15-02-20-15",
  "name": "http://azure-cli-2017-10-15-02-20-15",
  "password": "77851d2c-0000-0000-0000-cb3ebc97975a",
  "tenant": "72f988bf-0000-0000-0000-2d7cd011db47"
}

Assign permissions. We will use this same SP to create our AKS cluster.

APPID=<replace with above>
PASSWORD=<replace with above>

VNETID=$(az network vnet show --resource-group akschallenge --name myVnet --query id -o tsv)

az role assignment create --assignee $APPID --scope $VNETID --role Contributor

Create the new AKS Cluster

Set the SUBNET variable to the one created above.

SUBNET=$(az network vnet subnet show --resource-group akschallenge --vnet-name myVnet --name myAKSSubnet --query id -o tsv)

Create the cluster. Replace the name with a new, unique name.

Note: You may need to validate the variables below to ensure they are all set properly.

az aks create \
    --resource-group akschallenge \
    --name <unique-aks-cluster-name> \
    --node-count 3 \
    --kubernetes-version 1.12.6 \
    --network-plugin azure \
    --service-cidr 10.0.0.0/16 \
    --dns-service-ip 10.0.0.10 \
    --docker-bridge-address 172.17.0.1/16 \
    --vnet-subnet-id $SUBNET \
    --service-principal $APPID \
    --client-secret $PASSWORD \
    --no-wait

Once completed, validate that your cluster is up and get your credentials to access the cluster.

az aks get-credentials -n <your-aks-cluster-name> -g akschallenge
kubectl get nodes
Initialize the Helm components on the AKS cluster

Assuming the cluster is RBAC enabled, you have to create the appropriate ServiceAccount for Tiller (the server side Helm component) to use.

Save the YAML below as helm-rbac.yaml or download it from helm-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: tiller
    namespace: kube-system

And deploy it using

kubectl apply -f helm-rbac.yaml

Initialize Tiller (omit the --service-account flag if your cluster is not RBAC enabled)

helm init --service-account tiller

Enable virtual nodes

Add Azure CLI extension.

az extension add --source https://aksvnodeextension.blob.core.windows.net/aks-virtual-node/aks_virtual_node-0.2.0-py2.py3-none-any.whl

Enable the virtual node in your cluster.

az aks enable-addons \
    --resource-group akschallenge \
    --name <your-aks-cluster-name> \
    --addons virtual-node \
    --subnet-name myVirtualNodeSubnet

Verify the node is available.

kubectl get node

NAME                       STATUS   ROLES   AGE   VERSION
aks-nodepool1-30482081-0   Ready    agent   30m   v1.11.5
aks-nodepool1-30482081-1   Ready    agent   30m   v1.11.5
aks-nodepool1-30482081-2   Ready    agent   30m   v1.11.5
virtual-node-aci-linux     Ready    agent   11m   v1.13.1-vk-v0.7.4-44-g4f3bd20e-dev

Deploy MongoDB and the Capture Order API on the new cluster

Repeat the steps in the Deploy MongoDB section to deploy the database on your new cluster. Repeat the steps in the Deploy the Order Capture API section to deploy the API on your new cluster on traditional nodes.

Create a new Capture Order API deployment targeting the virtual node

Save the YAML below as captureorder-deployment-aci.yaml or download it from captureorder-deployment-aci.yaml

Be sure to replace the environment variables in the YAML to match your environment:

  • TEAMNAME
  • CHALLENGEAPPINSIGHTS_KEY
  • MONGOHOST
  • MONGOUSER
  • MONGOPASSWORD
apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder-aci
spec:
  selector:
      matchLabels:
        app: captureorder
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          env:
          - name: TEAMNAME
            value: "team-azch"
          #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
          #  value: "" # uncomment and set value only if you've been provided a key
          - name: MONGOHOST
            value: "orders-mongo-mongodb.default.svc.cluster.local"
          - name: MONGOUSER
            value: "orders-user"
          - name: MONGOPASSWORD
            value: "orders-password"
          ports:
          - containerPort: 8080
        nodeSelector:
          kubernetes.io/role: agent
          beta.kubernetes.io/os: linux
          type: virtual-kubelet
        tolerations:
        - key: virtual-kubelet.io/provider
          operator: Exists
        - key: azure.com/aci
          effect: NoSchedule

Deploy it.

kubectl apply -f captureorder-deployment-aci.yaml

Note the added nodeSelector and tolerations sections that basically tell Kubernetes that this deployment will run on the Virtual Node on Azure Container Instances (ACI).

Validate ACI instances

You can browse in the Azure Portal and find your Azure Container Instances deployed.

You can also see them in your AKS cluster:

kubectl get pod -l app=captureorder

NAME                                READY   STATUS    RESTARTS   AGE
captureorder-5cbbcdfb97-wc5vd       1/1     Running   1          7m
captureorder-aci-5cbbcdfb97-tvgtp   1/1     Running   1          2m
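
To see the node placement explicitly, add -o wide so the NODE column shows which pods landed on virtual-node-aci-linux:

kubectl get pods -l app=captureorder -o wide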

You can scale each deployment up/down and validate that each is functioning.

kubectl scale deployment captureorder --replicas=0

kubectl scale deployment captureorder-aci --replicas=5

Test the endpoint.

curl -d '{"EmailAddress": "email@domain.com", "Product": "prod-1", "Total": 100}' -H "Content-Type: application/json" -X POST http://[Your Service Public LoadBalancer IP]/v1/order

MongoDB replication using a StatefulSet

Now that you scaled the replicas running the API, maybe it is time to scale MongoDB? If you used the typical command to deploy MongoDB using Helm, most likely you deployed a single instance of MongoDB running in a single container. For this section, you’ll redeploy the chart with “replicaSet” enabled.

The authors of the MongoDB chart created it so that it supports deploying a MongoDB replica set, using a Kubernetes StatefulSet for the secondary nodes. A replica set in MongoDB provides redundancy and high availability and, in some cases, increased read capacity, as clients can send read operations to different servers.

Tasks

Upgrade the MongoDB Helm release to use replication

helm upgrade orders-mongo stable/mongodb --set replicaSet.enabled=true,mongodbUsername=orders-user,mongodbPassword=orders-password,mongodbDatabase=akschallenge

Verify how many secondaries are running

kubectl get pods -l app=mongodb

You should get a result similar to the below

NAME                               READY   STATUS    RESTARTS   AGE
orders-mongo-mongodb-arbiter-0     1/1     Running   1          3m
orders-mongo-mongodb-primary-0     1/1     Running   0          2m
orders-mongo-mongodb-secondary-0   1/1     Running   0          3m

Now scale the secondaries using the command below.

kubectl scale statefulset orders-mongo-mongodb-secondary --replicas=3

You should now end up with 3 MongoDB secondary replicas similar to the below

NAME                               READY   STATUS              RESTARTS   AGE
orders-mongo-mongodb-arbiter-0     1/1     Running             3          8m
orders-mongo-mongodb-primary-0     1/1     Running             0          7m
orders-mongo-mongodb-secondary-0   1/1     Running             0          8m
orders-mongo-mongodb-secondary-1   0/1     Running             0          58s
orders-mongo-mongodb-secondary-2   0/1     Running             0          58s

Resources

Enable SSL/TLS on frontend

You want to enable connecting to the frontend website over SSL/TLS. In this task, you’ll use the free Let’s Encrypt service to generate valid SSL certificates for your domains, and you’ll integrate the certificate issuance workflow into Kubernetes.

Tasks

Install cert-manager

cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources. It periodically ensures that certificates are valid and up to date, and attempts to renew certificates at an appropriate time before expiry.

Install cert-manager using Helm and configure it to use letsencrypt as the certificate issuer.

helm install stable/cert-manager --name cert-manager --set ingressShim.defaultIssuerName=letsencrypt --set ingressShim.defaultIssuerKind=ClusterIssuer --version v0.5.2

Create a Let’s Encrypt ClusterIssuer

In order to begin issuing certificates, you will need to set up a ClusterIssuer.

Save the YAML below as letsencrypt-clusterissuer.yaml or download it from letsencrypt-clusterissuer.yaml.

Note Make sure to replace _YOUR_EMAIL_ with your email.

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory # production
    #server: https://acme-staging-v02.api.letsencrypt.org/directory # staging
    email: _YOUR_EMAIL_ # replace this with your email
    privateKeySecretRef:
      name: letsencrypt
    http01: {}

And apply it using

kubectl apply -f letsencrypt-clusterissuer.yaml

Issue a certificate for the frontend domain

Issuing certificates happens through creating Certificate objects.

Save the YAML below as frontend-certificate.yaml or download it from frontend-certificate.yaml.

Note Make sure to replace _CLUSTER_SPECIFIC_DNS_ZONE_ with your cluster HTTP Routing add-on DNS Zone name. Also make note of the secretName: frontend-tls-secret as this is where the issued certificate will be stored as a Kubernetes secret. You’ll need this in the next step.

apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: frontend
spec:
  secretName: frontend-tls-secret
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
  - frontend._CLUSTER_SPECIFIC_DNS_ZONE_ # replace cluster specific dns zone with your HTTP Routing DNS Zone name
  acme:
    config:
    - http01:
        ingressClass: addon-http-application-routing
      domains:
      - frontend._CLUSTER_SPECIFIC_DNS_ZONE_  # replace cluster specific dns zone with your HTTP Routing DNS Zone name

And apply it using

kubectl apply -f frontend-certificate.yaml

Update the frontend Ingress with a TLS rule

Update the existing Ingress rule for the frontend deployment with the annotation kubernetes.io/tls-acme: 'true', and add a tls section pointing at the Secret where the certificate issued earlier is stored (frontend-tls-secret).

Save the YAML below as frontend-ingress-tls.yaml or download it from frontend-ingress-tls.yaml.

Note Make sure to replace _CLUSTER_SPECIFIC_DNS_ZONE_ with your cluster HTTP Routing add-on DNS Zone name.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: frontend
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
    kubernetes.io/tls-acme: 'true' # enable TLS
spec:
  tls:
  - hosts:
    - frontend._CLUSTER_SPECIFIC_DNS_ZONE_ # replace cluster specific dns zone with your HTTP Routing DNS Zone name
    secretName: frontend-tls-secret
  rules:
  - host: frontend._CLUSTER_SPECIFIC_DNS_ZONE_ # replace cluster specific dns zone with your HTTP Routing DNS Zone name
    http:
      paths:
      - backend:
          serviceName: frontend
          servicePort: 80
        path: /

And apply it using

kubectl apply -f frontend-ingress-tls.yaml

Verify the certificate is issued and test the website over SSL

Let’s Encrypt should automatically verify the hostname in a few seconds. Make sure that the certificate has been issued by running:

kubectl describe certificate frontend

You should get back something like:

Name:         frontend
Namespace:    default
Labels:       <none>
Annotations:  kubectl.kubernetes.io/last-applied-configuration:
                {"apiVersion":"certmanager.k8s.io/v1alpha1","kind":"Certificate","metadata":{"annotations":{},"name":"frontend","namespace":"default"},"sp...
API Version:  certmanager.k8s.io/v1alpha1
Kind:         Certificate
Metadata:
  Creation Timestamp:  2019-02-13T02:40:40Z
  Generation:          1
  Resource Version:    11448
  Self Link:           /apis/certmanager.k8s.io/v1alpha1/namespaces/default/certificates/frontend
  UID:                 c0a620ee-2f38-11e9-adae-0a58ac1f1147
Spec:
  Acme:
    Config:
      Domains:
        frontend.b3ec7d3966874de389ba.eastus.aksapp.io
      Http 01:
        Ingress Class:  addon-http-application-routing
  Dns Names:
    frontend.b3ec7d3966874de389ba.eastus.aksapp.io
  Issuer Ref:
    Kind:       ClusterIssuer
    Name:       letsencrypt
  Secret Name:  frontend-tls-secret

Verify that the frontend is accessible over HTTPS and that the certificate is valid.

Let's Encrypt SSL certificate
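
You can also check the certificate from the command line. A quick sketch using curl (replace _CLUSTER_SPECIFIC_DNS_ZONE_ with your HTTP Routing DNS Zone name); the verbose TLS output should show an issuer belonging to Let’s Encrypt:

curl -v https://frontend._CLUSTER_SPECIFIC_DNS_ZONE_ 2>&1 | grep -i issuer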

Note Because the captureorder service is deployed over HTTP, you may receive some browser warnings about “mixed content”, or the orders might not load at all because the calls happen via JavaScript. Redeploying the captureorder service to use Ingress over TLS is left as an exercise for the reader.

Resources

Use Azure Key Vault for secrets

Kubernetes provides a primitive, Secrets, which can be used to store sensitive information and later retrieve it as an environment variable or a volume mounted into memory. If you have tighter security requirements that Kubernetes Secrets don’t quite meet yet, for example you want an audit trail of all interactions with the keys, or versioning, or FIPS compliance, you’ll need to use an external key vault.

There are a couple of options to accomplish this including Azure Key Vault and HashiCorp Vault. In this task, you’ll use Azure Key Vault to store the MongoDB password.

The captureorder application can be configured to read the MongoDB password from either an environment variable or from the file system. This task is focused on configuring the captureorder container running in AKS to read the MongoDB password from a secret stored in Azure Key Vault using the Kubernetes FlexVolume plugin for Azure Key Vault.

Key Vault FlexVolume for Azure allows you to mount multiple secrets, keys, and certificates stored in Azure Key Vault into pods as an in-memory volume. Once the volume is attached, the data in it is mounted into the container’s file system in tmpfs.

Tasks

Create an Azure Key Vault

Azure Key Vault names are globally unique. Replace <unique keyvault name> with a unique name between 3 and 24 characters long.

az keyvault create --resource-group akschallenge --name <unique keyvault name>

Store the MongoDB password as a secret

Replace orders-password with the MongoDB password you set when deploying the MongoDB Helm chart (orders-password if you followed the earlier task).

az keyvault secret set --vault-name <unique keyvault name> --name mongo-password --value "orders-password"
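
You can verify the secret was stored correctly by reading it back (replace <unique keyvault name> with your Azure Key Vault name):

az keyvault secret show --vault-name <unique keyvault name> --name mongo-password --query value --output tsv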

Create Service Principal to access Azure Key Vault

The Key Vault FlexVolume driver offers two modes for accessing a Key Vault instance: Service Principal and Pod Identity. In this task, we’ll create a Service Principal that the driver will use to access the Azure Key Vault instance.

Replace <name> with a service principal name that is unique in your organization.

az ad sp create-for-rbac --name "http://<name>" --skip-assignment

You should get back something like the below. Make note of the appId and password.

{
  "appId": "9xxxxxb-bxxf-xx4x-bxxx-1xxxx850xxxe",
  "displayName": "<name>",
  "name": "http://<name>",
  "password": "dxxxxxx9-xxxx-4xxx-bxxx-xxxxe1xxxx",
  "tenant": "7xxxxxf-8xx1-41af-xxxb-xx7cxxxxxx7"
}

Ensure the Service Principal has all the required permissions to access secrets in your Key Vault instance

Retrieve your Azure subscription ID and keep it.

az account show --query id --output tsv

Retrieve your Azure tenant ID and keep it.

az account show --query tenantId --output tsv
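
If you’re working in Bash (for example in the Azure Cloud Shell), you may find it convenient to keep both values in variables so you can paste them into the manifests later; for example:

SUBSCRIPTION_ID=$(az account show --query id --output tsv)
TENANT_ID=$(az account show --query tenantId --output tsv)
echo $SUBSCRIPTION_ID $TENANT_ID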

Retrieve your Azure Key Vault ID and store it in a variable KEYVAULT_ID, replacing <unique keyvault name> with your Azure Key Vault name.

KEYVAULT_ID=$(az keyvault show --name <unique keyvault name> --query id --output tsv)

Create the role assignment, replacing "http://<name>" with your service principal name that was created earlier, for example "http://sp-captureorder".

az role assignment create --role Reader --assignee "http://<name>" --scope $KEYVAULT_ID

Configure Azure Key Vault to allow access to secrets using the Service Principal you created

Apply the policy on the Azure Key Vault, replacing the <unique keyvault name> with your Azure Key Vault name, and <appId> with the appId above.

az keyvault set-policy -n <unique keyvault name> --secret-permissions get --spn <appId>

Create a Kubernetes secret to store the Service Principal created earlier

Add your service principal credentials as a Kubernetes secret accessible by the Key Vault FlexVolume driver. Replace <appId> and <password> with the values you got above.

kubectl create secret generic kvcreds --from-literal clientid=<appId> --from-literal clientsecret=<password> --type=azure/kv

Deploy Key Vault FlexVolume for Kubernetes into your AKS cluster

Install the KeyVault FlexVolume driver

kubectl create -f https://raw.githubusercontent.com/Azure/kubernetes-keyvault-flexvol/master/deployment/kv-flexvol-installer.yaml

To validate the installer is running as expected, run the following command:

kubectl get pods -n kv

You should see the keyvault flexvolume pods running on each agent node:

keyvault-flexvolume-f7bx8   1/1       Running   0          3m
keyvault-flexvolume-rcxbl   1/1       Running   0          3m
keyvault-flexvolume-z6jm6   1/1       Running   0          3m

Modify the captureorder deployment to read the secret from the FlexVolume

The captureorder application can read the MongoDB password from an environment variable MONGOPASSWORD or from a file on disk at /kvmnt/mongo-password if the environment variable is not set (see code if you’re interested).

In this task, you’re going to modify the captureorder deployment manifest to remove the MONGOPASSWORD environment variable and add the FlexVol configuration.

Edit your captureorder-deployment.yaml and remove the MONGOPASSWORD environment variable from the env: section.

- name: MONGOPASSWORD
  value: "orders-password"

Add the below volumes definition to the configuration. It defines a FlexVolume called mongosecret that uses the Azure Key Vault driver. To authenticate to Azure Key Vault, the driver will look for the Kubernetes secret called kvcreds, which you created in an earlier step.

volumes:
  - name: mongosecret
    flexVolume:
      driver: "azure/kv"
      secretRef:
        name: kvcreds
      options:
        usepodidentity: "false"
        keyvaultname: <unique keyvault name>
        keyvaultobjectnames: mongo-password # Name of Key Vault secret
        keyvaultobjecttypes: secret
        resourcegroup: <kv resource group>
        subscriptionid: <kv azure subscription id>
        tenantid: <kv azure tenant id>

Mount the mongosecret volume to the pod at /kvmnt

volumeMounts:
  - name: mongosecret
    mountPath: /kvmnt
    readOnly: true

You’ll need to replace the placeholders with the values mapping to your configuration.

The final deployment file should look like so. Save the YAML below as captureorder-deployment.yaml or download it from captureorder-deployment.yaml. Make sure to replace the placeholders with values for your configuration.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: captureorder
spec:
  selector:
      matchLabels:
        app: captureorder
  replicas: 2
  template:
      metadata:
        labels:
            app: captureorder
      spec:
        containers:
        - name: captureorder
          image: azch/captureorder
          imagePullPolicy: Always
          readinessProbe:
            httpGet:
              port: 8080
              path: /healthz
          livenessProbe:
            httpGet:
              port: 8080
              path: /healthz
          resources:
            requests:
              memory: "128Mi"
              cpu: "100m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
          - name: TEAMNAME
            value: "team-azch"
          #- name: CHALLENGEAPPINSIGHTS_KEY # uncomment and set value only if you've been provided a key
          #  value: "" # uncomment and set value only if you've been provided a key
          - name: MONGOHOST
            value: "orders-mongo-mongodb.default.svc.cluster.local"
          - name: MONGOUSER
            value: "orders-user"
          ports:
          - containerPort: 8080
          volumeMounts:
          - name: mongosecret
            mountPath: /kvmnt
            readOnly: true
        volumes:
        - name: mongosecret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: kvcreds
            options:
              usepodidentity: "false"
              keyvaultname: <unique keyvault name>
              keyvaultobjectnames: mongo-password # Name of Key Vault secret
              keyvaultobjecttypes: secret
              keyvaultobjectversions: ""     # [OPTIONAL] list of KeyVault object versions (semi-colon separated), will get latest if empty
              resourcegroup: <kv resource group>
              subscriptionid: <kv azure subscription id>
              tenantid: <kv azure tenant id>

And deploy it using

kubectl apply -f captureorder-deployment.yaml


Verify that everything is working

Once you apply the configuration, validate that the captureorder pods load the secret from Azure Key Vault and that you can still process orders. You can also exec into one of the captureorder pods and verify that the MongoDB password has been mounted at /kvmnt/mongo-password.

# Get the pod name.
kubectl get pod -l app=captureorder

# Exec into the pod and view the mounted secret.
kubectl exec <podname> cat /kvmnt/mongo-password

The last command will return "orders-password".
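
If you want another sanity check that the application is still talking to MongoDB with the password it read from the volume, you can also look at the captureorder logs (the exact log lines depend on the application version):

kubectl logs -l app=captureorder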

Bonus: Setup Helm Chart for Azure DevOps Build

In Azure DevOps Repos, replace the contents of the values.yaml file found in the captureorder chart you created during the Helm task with the YAML below, and be sure to edit the placeholders (<>).

minReplicaCount: 1
maxReplicaCount: 2
targetCPUUtilizationPercentage: 50
teamName: <your team name>
appInsightKey: ""
mongoHost: "orders-mongo-mongodb.default.svc.cluster.local"
mongoUser: "orders-user"
mongoPassword: ""

image:
  repository: <unique-acr-name>.azurecr.io/captureorder
  tag: # Will be set at command runtime
  pullPolicy: Always
  
service:
  type: LoadBalancer
  port: 80

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 100m
    memory: 128Mi

flexVol:
  keyVaultName: <unique keyvault name> # Name of keyvault containing mongo password secret
  keyVaultSecretName: mongo-password # Name of the secret containing the mongo password
  keyVaultResourceGroup: <kv resource group> # Name of resource group containing keyvault
  subscriptionId: <kv azure subscription id> # target subscription id
  tenantId: <kv azure tenant id> # tenant ID of subscription

Also replace the …/templates/deployment.yaml file contents with these.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "captureorder.fullname" . }}
  labels:
    app: {{ template "captureorder.name" . }}
    chart: {{ template "captureorder.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  selector:
      matchLabels:
        app: captureorder
  template:
      metadata:
        labels:
          app: {{ template "captureorder.name" . }}
          release: {{ .Release.Name }}
      spec:
        containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          readinessProbe:
            httpGet:
              port: 8080
              path: /healthz
          livenessProbe:
            httpGet:
              port: 8080
              path: /healthz
          resources:
{{ toYaml .Values.resources | indent 12 }}
          env:
          - name: TEAMNAME
            value: {{ .Values.teamName }}
          - name: MONGOHOST
            value: {{ .Values.mongoHost }}
          - name: MONGOUSER
            value: {{ .Values.mongoUser }}
          - name: MONGOPASSWORD
            value: {{ .Values.mongoPassword }}
          ports:
          - containerPort: 80
          volumeMounts:
          - name: mongosecret
            mountPath: /kvmnt
            readOnly: true
        volumes:
        - name: mongosecret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: kvcreds
            options:
              usepodidentity: "false"
              keyvaultname: {{ .Values.flexVol.keyVaultName }}
              keyvaultobjectnames: {{ .Values.flexVol.keyVaultSecretName }}
              keyvaultobjecttypes: secret
              keyvaultobjectversions: ""
              resourcegroup: {{ .Values.flexVol.keyVaultResourceGroup }}
              subscriptionid: {{ .Values.flexVol.subscriptionId }}
              tenantid: {{ .Values.flexVol.tenantId }}

Save the file; this should trigger a build and redeploy. Verify everything works as above.
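
Once the pipeline finishes, a quick way to confirm the new pods rolled out (assuming the chart keeps the app=captureorder label from the template above):

kubectl get pods -l app=captureorder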

Resources

Terraform

Deploying the Azure Infrastructure and the container workload may seem like two different problems, but they are both infrastructure deployment concerns. Wouldn’t it be great to be able to use one tool to recreate the entire environment?

Learn more about Terraform by going to https://www.terraform.io/intro/index.html

Tasks

Use Terraform to deploy AKS

Deploy a Helm chart to new AKS cluster

Once you have the AKS cluster deployed, you can also use Terraform to deploy Helm charts https://www.terraform.io/docs/providers/helm/index.html

Hint Create a separate folder for this Terraform code. Terraform treats each folder as an entirely separate configuration, and this should simplify dependencies.
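
Whatever structure you choose, the standard Terraform workflow applies once your configuration files are in place. Run these from the folder containing your .tf files:

terraform init
terraform plan
terraform apply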

Clean up

Once you’re done with the workshop, make sure to delete the resources you created. You can read through manage Azure resources by using the Azure portal or manage Azure resources by using Azure CLI for more details.
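
If you created everything inside the akschallenge resource group, deleting that one resource group removes the cluster and the related resources. Double-check the name before running this, as the deletion cannot be undone:

az group delete --name akschallenge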

Proctor notes

Creating the Challenge Application Insights resource

You can quickly create the Application Insights resource required to track the challenge progress and plot the results by running the below code:

az resource create \
    --resource-group akschallenge \
    --resource-type "Microsoft.Insights/components" \
    --name akschallengeproctor \
    --location eastus \
    --properties '{"Application_Type":"web"}'  

Note Provide the Instrumentation Key to the attendees so that they can use it to fill in the CHALLENGEAPPINSIGHTS_KEY environment variable. If you’re using the default built-in Application Insights, just instruct attendees to delete the environment variable from their deployments.
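
One way to retrieve the Instrumentation Key for the resource created above is to query it directly (the key is exposed as the InstrumentationKey property on the component resource):

az resource show \
    --resource-group akschallenge \
    --resource-type "Microsoft.Insights/components" \
    --name akschallengeproctor \
    --query properties.InstrumentationKey \
    --output tsv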

Throughput scoring

In the Azure Portal, navigate to the Application Insights resource you created and click on Analytics.

Click on Analytics

Then you can periodically run the query below to view which team has the highest number of successful requests per second (RPS).

requests
| where success == "True"
| where customDimensions["team"] != "team-azch"
| summarize rps = count(id) by bin(timestamp, 1s), tostring(customDimensions["team"])
| summarize maxRPS = max(rps) by customDimensions_team
| order by maxRPS desc
| render barchart

Bar chart of requests per second

Contributors

The following people have contributed to this workshop, thanks!