In this blog post, we're going to deploy a SQL Server Availability Group in a Kubernetes cluster running on on-premises virtual machines. I'm going to walk you through the process as it's documented by Microsoft at this link here. That documentation is very good, but it only shows you how to do this in Azure; we're going to do it in VMs. I'm going to follow Microsoft's documentation as closely as possible, deviating only when necessary for on-premises deployments. I'm also going to explain the key Kubernetes concepts you need to know to understand how all these pieces fit together. This is a long one, buckle up.
Creating Your Cluster

In my last blog post, I showed you how to create a three-node Kubernetes cluster. I'm going to use the same setup for this blog post. Check out how to create this cluster at this link here. There's a reason I wrote that post first :)
Process Overview

Here's the big picture of all the steps we will perform in this demonstration:
1. Create a Namespace
2. Create Secrets
3. Create a Storage Class and mark it default
4. Create Persistent Volumes
5. Create a ServiceAccount, ClusterRole, ClusterRoleBinding and a Deployment
6. Deploy the SQL Server Pods
7. Expose the Availability Group Services as a Kubernetes Service
8. Connect to our Availability Group from outside our cluster
9. Create a database

I'm going to be running these commands from the Cluster Master, but you can run them from any host that can talk to your API Server. We're going to execute the first few commands using kubectl directly, then we're going to switch to loading our configurations from deployment manifests, which are YAML files that describe our configuration and, in this case, describe the Availability Group deployment.
Create a Namespace

First up, we'll create a Namespace. In Kubernetes, you can use namespaces to put boundaries around resources for organizational or security reasons.
demo@k8s-master1 : ~/ag $ kubectl create namespace ag1
namespace/ag1 created
demo@k8s-master1 : ~/ag $ kubectl get namespaces
NAME STATUS AGE
ag1 Active 11m
default Active 28h
kube-public Active 28h
kube-system Active 28h
Create Secrets

In Kubernetes, the cluster store can hold Secrets, in other words sensitive data like passwords. This is valuable because we don't want to store this information in our containers, and we certainly don't want our passwords sitting in clear text in our deployment manifests. Instead, those manifests reference the secret values, and upon deployment the Pods retrieve the secrets when they're started and pass them into the container for the application to use. In this case, we're creating two secrets. The first is the SA password we'll use for our SQL Server instance; the second is the password for our Service Master Key, which protects the certificates used to authenticate the Availability Group (*cough* Database Mirroring) endpoints.
Let’s create the secrets with kubectl .
demo@k8s-master1 : ~/ag $ kubectl create secret generic sql-secrets --from-literal=sapassword="1-S0methingS@Str0ng" --from-literal=masterkeypassword="2-S0methingS@Str0ng" --namespace ag1
secret/sql-secrets created
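To see how a deployment manifest consumes a secret without embedding the password, a container spec can reference it with valueFrom/secretKeyRef and Kubernetes injects the value at startup. This is just an illustrative sketch, not one of Microsoft's files; the Pod name, image tag, and environment variable name are my own placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sql-secret-demo          # hypothetical name, for illustration only
  namespace: ag1
spec:
  containers:
  - name: mssql
    image: mcr.microsoft.com/mssql/server:2019-latest   # illustrative tag
    env:
    - name: SA_PASSWORD          # injected into the container at startup
      valueFrom:
        secretKeyRef:
          name: sql-secrets      # the secret we just created
          key: sapassword        # the key inside that secret
```

The password never appears in the manifest itself, only the name of the secret and the key to pull from it.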
Want to know how to read a secret out of the cluster store? Here you go: just change the masterkeypassword string to the name of the secret you want to decode.
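One thing worth knowing before you run it: secret values in the cluster store are base64-encoded, which is encoding, not encryption, so anyone who can read the object can decode it. A quick round trip with the master key password from this post shows what that encoding looks like (this is just a local shell demo, no cluster required):

```shell
# Secret values in the cluster store are base64-encoded, not encrypted.
# Round-trip one of the passwords from this post to see the encoding.
plain='2-S0methingS@Str0ng'
encoded=$(printf '%s' "$plain" | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$encoded"
echo "$decoded"
```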
demo@k8s-master1 : ~/ag $ kubectl get secret sql-secrets -o yaml --namespace ag1 | grep masterkeypassword | awk '{ print $2 }' | base64 --decode
2-S0methingS@Str0ng

Create a Storage Class and Mark it Default
OK, now it's time to work on our storage configuration. This is where we're going to deviate most from the Microsoft documentation. We're going to start by creating a StorageClass. A StorageClass is a way to group storage together based on the storage's attributes. We're using one here because later on, the YAML files provided by Microsoft make storage requests of the cluster that reference the default storage class in the namespace. So that's what this code does: it creates the local-storage StorageClass and marks it as the default.
Here is the YAML definition of our StorageClass, save this into a file called StorageClass.yaml.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-storage
  annotations: { "storageclass.kubernetes.io/is-default-class": "true" }
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

File 1: StorageClass.yaml

Once you have that code saved into StorageClass.yaml, go ahead and run the following command to pass the StorageClass manifest to the API server for it to create the resource for you.
demo@k8s-master1 : ~/ag $ kubectl apply -f StorageClass.yaml --namespace ag1
storageclass.storage.k8s.io/local-storage unchanged
demo@k8s-master1 : ~/ag $ kubectl get storageclass --namespace ag1
NAME PROVISIONER AGE
local-storage (default) kubernetes.io/no-provisioner 3h20m
Create Persistent Volumes

Next up is creating our actual storage. In Kubernetes, the cluster provides storage and Pods request it. Cluster storage can be of many different types: NFS, virtual disks from your cloud provider, local storage and many more. Local storage is storage that's attached to the Kubernetes Node itself, and it's what we'll use in this demonstration. Since our Availability Group deployment will put a Pod on each Node in our cluster, we need to define three PersistentVolumes, one on each Node. Looking at the code in File 2 PV.yaml (below), you will see three PersistentVolumes, each with a different name, all pointing to /var/opt/mssql. This path is local to each Node in our cluster, so we will need to make a /var/opt/mssql directory on each node. Go ahead and do that now.
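Before we get to the full PV.yaml, here is a sketch of what one of those three local PersistentVolumes looks like. The name, capacity, and node name below are illustrative placeholders; File 2 defines the real values. Note the nodeAffinity section: it is required for local volumes and is what pins each volume to its Node.

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ag1-pv1                       # illustrative; File 2 defines three of these
spec:
  capacity:
    storage: 10Gi                     # illustrative size
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage     # the default class we created above
  local:
    path: /var/opt/mssql              # the directory created on each Node
  nodeAffinity:                       # required for local volumes; pins the PV to one Node
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k8s-node1                 # hypothetical Node name
```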
In an effort to loosely couple Pods