We're really making some progress! Our AKS cluster should now be running and ready for us to start putting resources into. If it isn't, head back to the previous article in the series and get your k8s cluster standing up in Azure!
Continuing with Pulumi
In the previous article, we used Pulumi exclusively to describe what we wanted in our AKS infrastructure, and we had the Pulumi CLI do all the hard work of provisioning our k8s cluster. Now we are going to ask Pulumi to do a little more work and help us get our k8s resources into the cluster.
First, let's create a new Pulumi Project and Stack to hold our resource specific configuration values, application, and history.
Let's create a folder in our infrastructure folder for the k8s deployment stack. Starting in your infra folder, run the command:
mkdir k8s && cd k8s
Now use the Pulumi CLI to build our new project with its initial stack.
pulumi new azure-typescript --secrets-provider=passphrase
This will kick off the workflow to acquire some details before it creates the stack. In my case, I answered the workflow questions with:
project name: (k8s) <-- hit enter and accepted the default
stack name: (dev) <-- hit enter and accepted the default
Enter your passphrase to protect config/secrets: P@ssw0rd!
azure:environment: (public) <-- hit enter and accepted the default
azure:location: (WestUS) WestUS
This will scaffold our new project and register the project and stack details with the Pulumi service. Let's open VS Code, open index.ts, and delete everything in the file.
We are going to need to add the Pulumi kubernetes SDK module to our project.
npm install @pulumi/kubernetes
Now, at the top of our empty index.ts file, we can add the following imports.
import * as pulumi from "@pulumi/pulumi";
import * as k8s from "@pulumi/kubernetes";
Getting Configuration Values
In our AKS Pulumi project/stack, we had a number of configuration values that we stored in our Pulumi service. One of those important configuration values was the kubeConfig file that contains the credentials required to connect to our k8s instance. We are now going to use the Pulumi SDK to get that kubeConfig value so that we can use it in this project.
// setup config
const env = pulumi.getStack(); // reference to this stack
const stackId = `dave/aks/${env}`;
const aksStack = new pulumi.StackReference(stackId);
const kubeConfig = aksStack.getOutput("kubeConfig");
const k8sProvider = new k8s.Provider("k8s", { kubeconfig: kubeConfig });

// output kubeConfig for debugging purposes
let _ = aksStack.getOutput("kubeConfig").apply(unwrapped => console.log(unwrapped));
If we break down this fragment of TypeScript, we:
Get a reference to the current stack
Ask Pulumi for an object reference to our AKS stack's outputs; we want the kubeConfig secret from it
Create a k8s.Provider using the acquired kubeConfig value
(Optional) Output the kubeConfig to the console for debugging
pulumi up to test your application. It should simply compile, access our aks stack, and output the kubeConfig!
Labels
Almost everything in k8s uses labels to perform important actions. For example, Services expose Deployments that match the selector labels. Also, you can use labels to do queries via kubectl or as filters in octant, k9s, or the Kubernetes Web UI. You'll see labels used throughout the manifests and the Pulumi code.
We will define some common label groups that we'll use throughout our application, merging them with service/deployment specific sets of labels as required. You can add to these groups as required for your situation.
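As a sketch (the group names and values here are my own; adjust them to your application), the common label group and a component-specific merge might look like:

```typescript
// Common labels shared by every resource in this stack.
// The names/values are illustrative, not required by k8s.
const appLabels = { app: "myproject", tier: "backend" };

// Merge the shared group with component-specific labels via object spread.
const postgresLabels = { ...appLabels, component: "postgres" };
```

The spread keeps the shared group in one place, so adding a label later flows into every resource that merges it.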
Now that we have a k8s.Provider that we can use to send instructions to our AKS cluster, we need to create the Pulumi instructions to start putting resources into our k8s cluster. Starting with postgres, let's take a look at our manifest that we used when putting resources into minikube.
We're going to ignore the first resource in the manifest. Because we are using AKS, we are able to take advantage of the dynamic persistent volume provisioning mechanism that it provides. We simply need to create a PersistentVolumeClaim with the correct storageClass, and Azure/AKS will take care of provisioning an actual persistent volume for us and attaching it to the correct host/node.
The second resource is the PersistentVolumeClaim. The important difference between the manifest and the Pulumi application is going to be the use of the storageClassName configuration value that is coming from our aksStack configuration. In the AKS stack, this value is set to managed-premium.
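A minimal sketch of that claim in Pulumi might look like the following. The resource name, size, and the postgresLabels label group are my placeholders, and I've hard-coded managed-premium here where the real code would read it from the aksStack outputs; k8sProvider is the provider we built earlier.

```typescript
// Hypothetical sketch: claim name and storage size are placeholders.
// storageClassName would normally come from the aksStack outputs;
// in the AKS stack it is set to "managed-premium".
const postgresPvc = new k8s.core.v1.PersistentVolumeClaim("postgres-pvc", {
    metadata: { labels: postgresLabels },
    spec: {
        accessModes: ["ReadWriteOnce"],
        storageClassName: "managed-premium",
        resources: { requests: { storage: "5Gi" } },
    },
}, { provider: k8sProvider });
```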
For our third resource, the actual Deployment, we mostly want to map the values from our manifest into the TypeScript/JSON notation.
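The mapping might look roughly like this sketch; the image tag, mount path, and names are placeholders (postgresLabels and k8sProvider come from earlier), so match them to your own manifest values.

```typescript
// Hypothetical sketch: image, names, and mount path are placeholders.
const postgresDeployment = new k8s.apps.v1.Deployment("postgres-dep", {
    metadata: { labels: postgresLabels },
    spec: {
        replicas: 1,
        selector: { matchLabels: postgresLabels },
        template: {
            metadata: { labels: postgresLabels },
            spec: {
                containers: [{
                    name: "postgres",
                    image: "postgres:11",
                    ports: [{ containerPort: 5432 }],
                    volumeMounts: [{
                        name: "postgres-storage",
                        mountPath: "/var/lib/postgresql/data",
                    }],
                }],
                volumes: [{
                    name: "postgres-storage",
                    // claimName should reference the PVC created above
                    persistentVolumeClaim: { claimName: "postgres-pvc" },
                }],
            },
        },
    },
}, { provider: k8sProvider });
```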
Last but not least, we map the Service manifest settings that will give the postgres-dep resource some network (internal to cluster) configuration into our Pulumi application.
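As a sketch, the Service maps over like this; the selector must match the Deployment's pod labels (postgresLabels here is my placeholder group), and the explicit metadata.name gives us the stable in-cluster DNS name postgres-svc.

```typescript
// Hypothetical sketch: a ClusterIP Service exposing postgres inside the cluster.
const postgresService = new k8s.core.v1.Service("postgres-svc", {
    metadata: { name: "postgres-svc", labels: postgresLabels },
    spec: {
        type: "ClusterIP",
        selector: postgresLabels, // must match the Deployment's pod labels
        ports: [{ port: 5432, targetPort: 5432 }],
    },
}, { provider: k8sProvider });
```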
pulumi up to compile our application and deploy postgres into the k8s cluster!
Moving pgAdmin4 from Manifest to Pulumi
Our next step is to migrate the pgAdmin4 manifest settings into Pulumi. This is going to be mostly the same as the postgres migration. I'll point out a few differences below.
For brevity, I've already removed the PersistentVolume manifest declaration.
There are a few items that are different as we move to Pulumi, because we are using Pulumi for the (eventual) Azure production environment and the operationalization of our stack. We will be using pgAdmin4 (the container, if not the application) to do database backups. The pgAdmin4 container has all of the tooling and settings required to do backups. What we need, though, is somewhere to put the backups. In this case, we will leverage another feature of AKS: the azureFile storage provider for k8s. This bit of configuration instructs k8s, via Azure and AKS, to use a storage account file share as a volume in our pod.
name: pgAdminAzureVolumeName, // <-- This is "azure"
azureFile: {
    secretName: azureStorageSecretName,
    shareName: azureFileShareName,
    readOnly: false
}
Recall that these resources were created in the AKS project so we need these values from our other stack configuration.
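Fetching them looks just like the kubeConfig lookup earlier. The output names below are my guesses; use whatever names your AKS stack actually exports.

```typescript
// Hypothetical output names; match them to your AKS stack's exports.
const azureStorageSecretName = aksStack.getOutput("storageSecretName");
const azureFileShareName = aksStack.getOutput("fileShareName");
```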
Eventually, I expect we will move to a managed Azure Database for Postgres SaaS offering, so all of this may eventually disappear. For now, we'll manage it all ourselves.
pulumi up will put the pgAdmin4 application/images into our k8s cluster. With the given configuration, it will be able to connect to the postgres database.
Moving Seq from Manifest to Pulumi
The final step (for now) of moving all of our infrastructure/backend components into pulumi is the Seq instance. This will be similar to the postgres instance as it also uses a PersistentVolumeClaim to get an azureDisk attached to the VM for saving our log data.
There are two notable differences in Pulumi Application code that were not in the manifests.
First of all, we are adding a side-car container to this pod now that we are moving to Azure. This means there are two containers running in this Pod. They are separate images (applications), but they share the same network space and can reach each other at localhost on the appropriate port. The side-car we are adding is a container image called sqelf, a product from datalust.co that allows us to ingest log events in the GELF (Graylog) message format and send them directly into Seq.
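The containers portion of the Pod spec might be sketched like this. The image tags, ports, and the SEQ_ADDRESS value are my assumptions about the sqelf/Seq container defaults, so verify them against the datalust.co image documentation.

```typescript
// Hypothetical fragment of the Pod spec: two containers in one Pod.
// They share the Pod's network namespace, so sqelf can reach Seq at localhost.
containers: [
    {
        name: "seq",
        image: "datalust/seq:latest",
        ports: [{ containerPort: 80 }], // Seq UI and ingestion
    },
    {
        name: "sqelf",
        image: "datalust/sqelf:latest",
        // sqelf listens for GELF events on UDP 12201 and forwards them to Seq.
        ports: [{ containerPort: 12201, protocol: "UDP" }],
        // Assumed env var; confirm the exact name/port against the sqelf docs.
        env: [{ name: "SEQ_ADDRESS", value: "http://localhost" }],
    },
],
```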
The second change is in the Service description. We need to expose the sqelf service on a UDP port so that, eventually, fluentd will be able to send log event messages, in the GELF format, to the sqelf container. The sqelf container then forwards them on to Seq. We'll discuss fluentd in another article.
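As a sketch, a single Service can expose both ports; the seqLabels group and port numbers here are my placeholders, with k8sProvider from earlier.

```typescript
// Hypothetical sketch: expose the Seq UI (TCP) and the sqelf GELF input (UDP).
const seqService = new k8s.core.v1.Service("seq-svc", {
    metadata: { name: "seq-svc", labels: seqLabels },
    spec: {
        selector: seqLabels, // must match the Seq pod's labels
        ports: [
            { name: "seq-ui", port: 80, targetPort: 80, protocol: "TCP" },
            { name: "gelf", port: 12201, targetPort: 12201, protocol: "UDP" },
        ],
    },
}, { provider: k8sProvider });
```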
pulumi up will get the Seq pod up and running, with 2 containers, in our k8s infrastructure.
Moving IdentityServer4 from Manifest to Pulumi
The final bit of work to get our platform moved from minikube to AKS is to move our applications. There is nothing terribly special or noteworthy from a container perspective in this mapping. None of these pods need durable storage. The only difference with these pods is that they indicate that they want to use the docker-credentials secret to access the private ACR.
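That wiring is just an imagePullSecrets entry in each pod spec. This fragment is a hypothetical sketch; the registry host and image name are placeholders.

```typescript
// Hypothetical fragment of a Deployment's pod template spec:
// pull images from the private ACR using the docker-credentials secret.
spec: {
    imagePullSecrets: [{ name: "docker-credentials" }],
    containers: [{
        name: "identityserver",
        image: "myregistry.azurecr.io/identityserver:latest", // placeholder registry/image
        ports: [{ containerPort: 80 }],
    }],
},
```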
There is a particularly important difference for our application when it comes to networking and accessing our applications. Our k8s cluster has a public IP address, and all of our applications will eventually be accessed via a unique DNS entry, not a shared DNS entry with different ports. I imagine you could use the shared DNS/multiple ports approach, but I did not. Hiding the intention behind a port number doesn't help people understand what they are using or working on. So, we will give everything its own URL.
This has consequences for our initial database seed data. Once this is all deployed, we will have to go into pgAdmin4 and run a script to update our configuration. Otherwise, the authentication system won't work. The Admin application needs to be allowed to access the STS at its new hostname, so we'll need to change the STS seed data. We're also going to change the configuration that is stored in the environment variables, which are appsettings.json overrides.
I'm so glad that Pulumi is a programmatic IaC model. One thing I can do here, before we get too far along, is move my DB connection strings (and the like) into variables and then use them where needed.
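Something along these lines, for instance; the database names and credentials below are illustrative only, and in real code the secrets would come from stack config rather than literals.

```typescript
// Illustrative values only; real credentials belong in Pulumi config/secrets.
const postgresHost = "postgres-svc"; // the in-cluster Service DNS name
const identityDbConnection =
    `Host=${postgresHost};Port=5432;Database=identity;Username=app;Password=changeme`;
const adminDbConnection =
    `Host=${postgresHost};Port=5432;Database=admin;Username=app;Password=changeme`;
```

Now a host or port change happens in exactly one place instead of in every env-var block.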
pulumi up will push all of your applications up into AKS.
Verify all the Pods are Up and Running
With our last pulumi up, we should have all of our AKS infrastructure up and hosting our k8s cluster, and we should have all of our applications/services/resources from our second Pulumi stack in the k8s cluster. We should now be able to go into one of our tools and verify that the apps are running.
If you used the Pulumi application from our previous post, you should have an az aks get-credentials ... command in the Pulumi output for that stack. It should look something like this:
az aks get-credentials --resource-group rg_identity_dev_zwus_aks `
  --name aksclusterfbfa950e --context "MyProject.Identity"
You can run that command, with your specific values, and have the azure-cli append this cluster's context details to your kubeConfig file. Once that is done, we can type octant on the command line and look at our pods.
Using octant (v0.12.1), we can see that all of our pods are present in the cluster.
We will click into the pgAdmin4 pod as an example in the article, but you should eventually click into all of the pods to ensure they are functioning.
Looking at the pgAdmin4 pod, we can see that it is initialized and ready! If we go down a bit further, we'll see the port forward functionality of octant waiting for us to press the button.
Once we press the button, we'll see that we now have an option to navigate to the port-forward URL.
When we click on that link, you'll see the pgAdmin4 login screen, and once you log in, you should be able to create a server entry pointing at our postgres pod. The in-cluster DNS name for our postgres pod should be postgres-svc, from our Service resource. Once the entry is connected, we should be able to see our postgres database!
You should work your way through all of the pods. They should all be up and running. The whole authentication/authorization system won't be working yet. We'll wait until the next article to fix that up and test it out.
Problems
When you have problems in k8s, you should start looking in two places. First, look in the logs. octant has a nice screen for viewing the log output directly on a pod. Remember, we don't have log ingestion and tooling set up for the k8s cluster yet, so we have to look in the pods themselves. Here you can see pgAdmin4 running just fine, but if it weren't working, you'd find clues as to why here.
Once we have all of our log ingestion infrastructure in place, we will be able to go to Seq to look at log entries across the cluster, but you should always be ready to look directly in the pods for the log entries.
The other place you will probably want to inspect is the running pod itself. You can use the terminal tab in octant to look at various settings in your pod.
Not Quite Done Yet
This article is going to end in a bit of a "not quite working as I'd like" state. All of the pods are up and running, but our IdentityServer4 needs some configuration changes in order to work. And in order to do that, we need to be able to access our k8s publicly, give it a DNS entry (hostname), and then configure our IdentityServer4 system to allow those hostnames to interact with the STS.
Our next stop will be adding an Ingress Controller to our k8s cluster and getting all of these services publicly available.