Introduction
Azure Kubernetes Service (AKS) is a managed container orchestration service on the Microsoft Azure public cloud, built on the open-source Kubernetes system. AKS simplifies deploying a managed Kubernetes cluster by offloading operational overhead, such as health monitoring and maintenance, to Azure.
Key Concepts
Kubernetes
- Kubernetes: An open-source platform designed to automate deploying, scaling, and operating application containers.
- Cluster: A set of nodes (virtual machines) that run containerized applications.
- Node: A single machine in the Kubernetes cluster, which can be a virtual or physical machine.
- Pod: The smallest deployable unit in Kubernetes, which can contain one or more containers.
AKS Specifics
- Managed Service: Azure handles the Kubernetes control plane, including the API server, scheduler, and other core components.
- Scaling: AKS supports both manual and automatic scaling of the cluster; example commands for both appear after this list.
- Integration: Seamless integration with other Azure services like Azure Active Directory, Azure Monitor, and Azure DevOps.
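For example, you can resize the default node pool manually or turn on the cluster autoscaler with the Azure CLI. The resource group and cluster names below are the same placeholder values used in the setup steps later in this section, and the node counts are illustrative:

    az aks scale --resource-group myResourceGroup --name myAKSCluster --node-count 3
    az aks update --resource-group myResourceGroup --name myAKSCluster --enable-cluster-autoscaler --min-count 1 --max-count 5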
Setting Up AKS
Prerequisites
- An active Azure subscription.
- Azure CLI installed on your local machine.
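The steps below also use kubectl, the Kubernetes command-line client. If you do not already have it installed, the Azure CLI can install it for you:

    az aks install-cli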
Steps to Create an AKS Cluster
- Log in to Azure:
    az login
- Create a resource group:
    az group create --name myResourceGroup --location eastus
- Create an AKS cluster with one node and the monitoring add-on enabled:
    az aks create --resource-group myResourceGroup --name myAKSCluster --node-count 1 --enable-addons monitoring --generate-ssh-keys
- Get the AKS credentials; this merges the cluster's access credentials into your local kubeconfig so kubectl can connect:
    az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
- Verify the cluster by listing its nodes:
    kubectl get nodes
Deploying Applications on AKS
Example: Deploying a Simple Web Application
- Create a deployment YAML file (for example, deployment.yaml):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-web-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: my-web-app
      template:
        metadata:
          labels:
            app: my-web-app
        spec:
          containers:
          - name: my-web-app
            image: nginx:1.14.2
            ports:
            - containerPort: 80

- Apply the deployment:
    kubectl apply -f deployment.yaml
- Expose the deployment through a LoadBalancer service (an equivalent Service manifest is sketched after these steps):
    kubectl expose deployment my-web-app --type=LoadBalancer --name=my-service
- Get the external IP of the service:
    kubectl get service my-service
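The kubectl expose command above asks Kubernetes to create a Service object for you. As a rough sketch, an equivalent Service manifest would look something like this (the name my-service and the selector app: my-web-app match the values used in the steps above):

    apiVersion: v1
    kind: Service
    metadata:
      name: my-service
    spec:
      type: LoadBalancer
      selector:
        app: my-web-app
      ports:
      - port: 80
        targetPort: 80

Once Azure has provisioned a public IP for the load balancer, kubectl get service my-service shows it in the EXTERNAL-IP column.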
Practical Exercise
Exercise: Deploy a Multi-Container Application
- Create a YAML file for a multi-container pod (multi-container-pod.yaml):

    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-container-pod
    spec:
      containers:
      - name: nginx-container
        image: nginx:1.14.2
        ports:
        - containerPort: 80
      - name: sidecar-container
        image: busybox
        command: ['sh', '-c', 'echo Hello from the sidecar! && sleep 3600']

- Deploy the pod:
    kubectl apply -f multi-container-pod.yaml
- Verify the pod is running:
    kubectl get pods
- Access the logs of the sidecar container:
    kubectl logs multi-container-pod -c sidecar-container
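If the pod does not reach the Running state, a quick way to inspect both containers and any scheduling or image-pull events is:

    kubectl describe pod multi-container-pod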
Solution Explanation
- The YAML file defines a pod with two containers: an Nginx web server and a BusyBox sidecar container.
- The kubectl apply command deploys the pod to the AKS cluster.
- The kubectl get pods command verifies that the pod is running.
- The kubectl logs command retrieves the logs from the sidecar container.
- Because both containers run in the same pod, they share the pod's network namespace and can reach each other over localhost; the example after this list shows one way to verify this.
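To see that shared network namespace in action, one optional check (assuming the pod from the exercise is still running) is to exec into the BusyBox sidecar and fetch the Nginx welcome page over localhost:

    kubectl exec multi-container-pod -c sidecar-container -- wget -qO- http://localhost:80

If the containers are wired up as above, this should print the default Nginx welcome page HTML, confirming that the sidecar reaches the web server without leaving the pod.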
Common Mistakes and Tips
- Resource Limits: Always define resource requests and limits for your containers to avoid resource contention.
- Health Checks: Implement liveness and readiness probes to ensure your application is running correctly. (A sketch covering both of these tips appears after this list.)
- Namespace Usage: Use namespaces to organize and manage resources efficiently.
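As a minimal sketch of the first two tips, the container section of the earlier deployment could be extended with resource requests/limits and probes. The CPU, memory, and probe values below are illustrative placeholders, not tuned recommendations:

    containers:
    - name: my-web-app
      image: nginx:1.14.2
      ports:
      - containerPort: 80
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
      livenessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 2
        periodSeconds: 5

For the namespace tip, you can create a namespace and deploy into it with, for example, kubectl create namespace my-team and kubectl apply -f deployment.yaml -n my-team (my-team is a placeholder name).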
Conclusion
In this section, you learned what Azure Kubernetes Service (AKS) is, how to set up an AKS cluster, and how to deploy applications on it. You also practiced deploying a multi-container application and picked up some best practices. In the next module, we will explore Azure Functions and how to create serverless applications on Azure.