Deploying to Azure Kubernetes Service (AKS)
In continuation of my studies on the Azure ecosystem in preparation for the Microsoft Certified: Azure Developer Associate (AZ-204) Exam, I have chosen to delve into Azure Kubernetes Service by deploying a web application consisting of a REST API and an Angular frontend. Throughout this journey, my goal is to document and share my process and findings, shedding light on the challenges and discoveries encountered along the way.
Workflow and Tooling
The deployment workflow relies on Azure Container Registry (ACR) to store the application components’ container images and on Azure Kubernetes Service (AKS) to pull those images and run the services in a Kubernetes cluster in the cloud. This combination enables a streamlined deployment process and facilitates infrastructure management through version-controlled Git repositories.
Requirements: an Azure subscription, plus the Azure CLI, Docker, and kubectl installed locally.
Development Process:
Setting Up Azure Resources with Azure CLI
First, let’s create the Azure resources we need using the Azure CLI. We’ll start by provisioning an Azure Container Registry (ACR) to store our Docker images. Execute the following commands in your terminal:
# Create Resource Group
az group create --name Webapp.Demo --location brazilsouth
# Create ACR
az acr create --resource-group Webapp.Demo --name vgwebappdemo --sku Basic
Next, we’ll create an Azure Kubernetes Service (AKS) cluster to host our application. Run the following commands:
# Create AKS Cluster
az aks create --resource-group Webapp.Demo --name webappdemo --node-count 1 --generate-ssh-keys
az aks get-credentials --resource-group Webapp.Demo --name webappdemo
kubectl config use-context webappdemo
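To confirm that kubectl is now pointed at the new cluster, we can list its nodes:
kubectl get nodes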
Let’s extend our application architecture by integrating an Azure SQL Database to serve as the backend data store for our API. Execute the following commands in your terminal:
# Create a SQL server
az sql server create --name vgwebappdemodbserver --resource-group Webapp.Demo --location brazilsouth --admin-user <admin_username> --admin-password <admin_password>
# Configure a firewall rule that allows Azure services to access the server
az sql server firewall-rule create --resource-group Webapp.Demo --server vgwebappdemodbserver --name AllowAzureServices --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0
# Create a SQL database on the server
az sql db create --resource-group Webapp.Demo --server vgwebappdemodbserver --name webappdemo --service-objective S0
The above script creates a SQL server resource and an associated database under the same resource group as our container registry and Kubernetes cluster. Keep in mind that the firewall rule set by the script (allowing access from Azure services) is not recommended for production scenarios, and you should consider alternatives such as adding a virtual network rule.
With the database created, you can get a connection string and save it for configuring the API later:
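One way to obtain it is through the Azure CLI, which returns an ADO.NET connection string template to fill in with the server admin credentials created earlier:
az sql db show-connection-string --client ado.net --server vgwebappdemodbserver --name webappdemo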
Notice how the above connection string uses the server admin credentials for authentication. Instead, we should connect to the database (using a tool like SSMS) and create a new user with only the permissions the API needs:
CREATE USER [<username>] WITH PASSWORD = '<password>';
ALTER ROLE db_datareader ADD MEMBER [<username>];
ALTER ROLE db_datawriter ADD MEMBER [<username>];
Creating Dockerfiles and Building Images Locally
The next step is to create a Dockerfile for each application component. Here’s the Dockerfile for the .NET REST API:
# Base runtime image
FROM mcr.microsoft.com/dotnet/aspnet:6.0 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443

# Restore dependencies and build the project
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build
WORKDIR /src
COPY ["Application/VictorGago.Webapp.Demo.Application.csproj", "Application/"]
RUN dotnet restore "Application/VictorGago.Webapp.Demo.Application.csproj"
COPY . .
WORKDIR "/src/Application"
RUN dotnet build "VictorGago.Webapp.Demo.Application.csproj" -c Release -o /app/build

# Publish the application
FROM build AS publish
RUN dotnet publish "VictorGago.Webapp.Demo.Application.csproj" -c Release -o /app/publish /p:UseAppHost=false

# Final image: copy the published output into the runtime image
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "VictorGago.Webapp.Demo.Application.dll"]
The UI project is an Angular single-page application. We build the application for production and serve it with Nginx:
# Stage 1: Build Angular app with Node.js
FROM node:alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Stage 2: Serve Angular app with Nginx
FROM nginx:latest
COPY --from=build /app/www /usr/share/nginx/html
COPY /nginx.local.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
You may have noticed that the build is performed with a plain npm run build. Although Angular supports environment-specific build configurations at build time, I decided to externalize the configuration options to a config.json file, thereby abiding by the build once, deploy anywhere principle and avoiding baking app settings into the application bundle.
To build the images using the Docker CLI, from the project’s root directory:
docker build -t vgwebappdemo.azurecr.io/victorgago/webapp-demo/ui:latest -f ./UI/Dockerfile ./UI
docker build -t vgwebappdemo.azurecr.io/victorgago/webapp-demo/api:latest -f ./API/Dockerfile ./API
Pushing Images to Azure Container Registry (ACR):
Push the images built locally directly to the ACR repository with the following commands:
az acr login --name vgwebappdemo
docker push vgwebappdemo.azurecr.io/victorgago/webapp-demo/ui:latest
docker push vgwebappdemo.azurecr.io/victorgago/webapp-demo/api:latest
We can verify the images have been successfully published by going to the Azure Portal and browsing the container registry’s repositories.
Alternatively, using the Azure CLI, we can list all repositories in the ACR resource and query the tags of each one, checking the tags, digest, and timestamp:
az acr repository list --name vgwebappdemo
az acr manifest list-metadata -r vgwebappdemo -n victorgago/webapp-demo/api --orderby time_desc
az acr manifest list-metadata -r vgwebappdemo -n victorgago/webapp-demo/ui --orderby time_desc
Creating Kubernetes Manifests and GitOps Workflow:
With the application images hosted in ACR, the next step was to create the Kubernetes manifests for the application components. I decided to embrace a GitOps approach to streamline the deployment workflow. At its core, GitOps promotes managing infrastructure and applications through version-controlled Git repositories. With GitOps, the Kubernetes configuration resides in a Git repository, serving as the single source of truth for application infrastructure.
To implement GitOps effectively, it’s important to organize Kubernetes manifests in a structured manner. I decided to go with a base and overlay folder structure that is well suited for accommodating multiple environments and modularity across components:
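As a rough illustration (not the exact repository layout; the file names here are assumptions), the structure looks something like this:
GitOps/
  base/
    api/        # deployment.yml, service.yml, kustomization.yml
    ui/         # deployment.yml, service.yml, kustomization.yml
  overlays/
    test/       # environment-specific kustomization.yml and config.json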
webapp-demo-ui Deployment and Service
Deployment: Defines the deployment configuration for the UI component of the web application. It specifies the container image to use, exposes port 8080, and mounts a volume for the config.json file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-demo-ui-deployment
  namespace: default
  labels:
    app: webapp-demo-ui
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-demo-ui
  template:
    metadata:
      labels:
        app: webapp-demo-ui
    spec:
      containers:
        - name: webapp-demo-ui
          image: vgwebappdemo.azurecr.io/victorgago/webapp-demo/ui:latest
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: config-volume
              mountPath: /usr/share/nginx/html/assets/config
              readOnly: false
      imagePullSecrets:
        - name: acr-secret
      volumes:
        - name: config-volume
          configMap:
            name: ui-config
The reason for this volume goes back to the intention of building a single image for the frontend, which is an Angular application. Because configuration is often baked into Angular apps at build time, through the use of environment.*.ts files, I decided to externalize the settings to a JSON file that is fetched during application startup. This way, each overlay has its own config.json file containing environment-specific configuration values:
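For illustration only, a test overlay’s config.json could look like the following; the apiUrl key is a hypothetical example and not taken from the actual project:
{
  "apiUrl": "http://<api-external-ip>/api"
}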
Service: Exposes the UI deployment. Because it uses the LoadBalancer type, it is reachable from outside the cluster as well as by other components within it:
apiVersion: v1
kind: Service
metadata:
  name: webapp-demo-ui
  namespace: default
spec:
  selector:
    app: webapp-demo-ui
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
  type: LoadBalancer
Using a LoadBalancer service type will make the UI component publicly available through a load balancer resource that Azure creates for us. In the Azure Portal, there will be an External IP assigned to this service.
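Once it has been provisioned, the external IP can also be retrieved from the command line:
kubectl get service webapp-demo-ui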
webapp-demo-api Deployment and Service
Deployment: Specifies the deployment configuration for the API component of the web application. It defines the container image, exposes port 8080, and sets environment variables, including the database connection string.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-demo-api-deployment
  namespace: default
  labels:
    app: webapp-demo-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webapp-demo-api
  template:
    metadata:
      labels:
        app: webapp-demo-api
    spec:
      containers:
        - name: webapp-demo-api
          image: vgwebappdemo.azurecr.io/victorgago/webapp-demo/api:latest
          ports:
            - containerPort: 8080
          env:
            - name: DEFAULT_CONNECTION_STRING
              value: "Server=tcp:webappdemoserver.database.windows.net,1433;Initial Catalog=webappdemo;Persist Security Info=False;User ID=webapp-demo;Password={your_password};MultipleActiveResultSets=False;Encrypt=True;TrustServerCertificate=False;Connection Timeout=30;"
      imagePullSecrets:
        - name: acr-secret
It is important to note that hardcoding sensitive information should always be avoided, and the above example, although functional, poses a security risk that can be addressed by alternatives such as an Azure Key Vault integration. I might revisit this in the future, when I get to the application security portion of the AZ-204 learning path.
Service: Exposes the API deployment. Like the UI service, it uses the LoadBalancer type, so the API also receives an external IP.
apiVersion: v1
kind: Service
metadata:
  name: webapp-demo-api
  namespace: default
spec:
  selector:
    app: webapp-demo-api
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
      name: http
  type: LoadBalancer
Kustomization Files
Kustomization for UI and API: Specifies the resources to include, namely the deployment and service YAML files. Below is the Kustomization file for the UI component:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
configMapGenerator:
  - name: ui-config
    namespace: default
    files:
      - config.json
resources:
  - deployment.yml
  - service.yml
ConfigMaps are a convenient way to store configuration data in Kubernetes, and using ConfigMapGenerator allows us to dynamically generate ConfigMaps based on the contents of files, such as config.json. By including the config.json file within the UI overlay directory, we ensure that each environment has its own configuration file.
The last step is to mount the ConfigMap at the desired location. In this case, I wanted to place it in the assets/config subfolder of the application’s root folder being served by the nginx process.
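For reference, this is the relevant excerpt from the UI deployment manifest shown earlier:
volumeMounts:
  - name: config-volume
    mountPath: /usr/share/nginx/html/assets/config
    readOnly: false
volumes:
  - name: config-volume
    configMap:
      name: ui-config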
Below is the Kustomization file for the API:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yml
  - service.yml
Together, these configurations enable the deployment of the web application, in this case a frontend application and the API it consumes, to Azure Kubernetes Service.
To validate the manifests, I ran kubectl kustomize in the base and overlay folders, making sure the specs were ready for deployment.
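For example, for the test overlay used in the deployment step below:
kubectl kustomize .\GitOps\overlays\test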
Creating Credential Secrets and Integrating with Kubernetes Manifests
To ensure secure access to Azure Container Registry (ACR) from the Azure Kubernetes Service (AKS) cluster, we need to create a Kubernetes secret to store the ACR credentials. This secret will then be referenced in the Kubernetes manifests to enable AKS to pull the required container images.
Before creating the Kubernetes secret, let’s retrieve the ACR credentials using the Azure CLI:
# Get ACR credentials to use in acr-secret
az acr update --name vgwebappdemo --admin-enabled true
az acr credential show --name vgwebappdemo
This command provides the username and password needed to authenticate with the ACR. We’ll use the retrieved credentials to create a Kubernetes secret named acr-secret.
Execute the following, substituting the values extracted from the previous command:
kubectl create secret docker-registry acr-secret --docker-server="vgwebappdemo.azurecr.io" --docker-username="<ACR_USERNAME>" --docker-password="<ACR_PASSWORD>" --docker-email="anyemail@outlook.com" --namespace=default
In the Kubernetes manifests, I referenced the acr-secret to allow AKS to pull the container images from ACR. Here’s an example snippet that enables Kubernetes to use the credentials for pulling the images from ACR:
...
spec:
  ...
  template:
    ...
    spec:
      imagePullSecrets:
        - name: acr-secret
Deploying to Azure Kubernetes Service (AKS):
Apply the Kubernetes manifests to our AKS cluster using kubectl:
kubectl apply -k .\GitOps\overlays\test
Verify the deployment and check the status of the pods:
kubectl get pods
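We can also confirm that the UI and API services received their external IPs:
kubectl get services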
Additionally, we can verify the state of the deployment workloads in the Azure Portal, where a green checkmark next to the application component deployments means that the Kubernetes pods for those workloads have been initialized properly.
Those healthy workloads indicate that the application was successfully deployed to AKS. While we’ve already exposed the UI and API components to the outside world using external IPs, further configuring Ingress resources can provide additional benefits such as traffic routing, SSL termination, and path-based routing. Stay tuned for the next part of the journey, where we’ll explore how to enhance our deployment with Kubernetes Ingress, adding more flexibility and control to our application’s external access. In part three, we’ll delve into GitOps and FluxCD, automating our deployment workflows and ensuring version-controlled infrastructure for seamless management.
Conclusion
In this installment, we navigated through the intricacies of deploying a web application to Azure Kubernetes Service (AKS). By leveraging Azure CLI for resource provisioning, Docker CLI for image building, and kubectl for Kubernetes deployment, we successfully deployed our application to AKS. Stay tuned for more insights and discoveries as we continue to explore the vast landscape of Kubernetes.