As Kubernetes continues to dominate container orchestration, infrastructure automation products are expanding their feature sets with connectors that let you configure and manage Kubernetes alongside the other resources you already manage with your automation tool of choice.
As a leading infrastructure-as-code product, Terraform has a connector called the Kubernetes provider. Let’s take a look at what you can do with it.
Can the Kubernetes provider build a cluster?
No, the provider will not build and deploy a Kubernetes cluster; it requires a cluster to be up and running before you can use it. If you want Terraform to build and deploy a Kubernetes cluster in the public cloud, you would use the cloud-specific provider. An example would be using the Terraform Provider from UpCloud.
Simple scenario for the Terraform Kubernetes provider
A simple but common use case for the Terraform Kubernetes provider is to create a namespace, deploy an application as a pod, and then expose it as a service. Below are examples of the Terraform configuration you would use.
Step 1: Configure the provider
The easiest way to handle credentials is to create a default kubeconfig file pointing at your cluster. The default location for this file is ~/.kube/config.
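If that file is already in place, the provider block can simply point at it. A minimal sketch using the provider's config_path argument (the path shown is just the default):

provider "kubernetes" {
  # Read cluster credentials from the default kubeconfig location.
  config_path = "~/.kube/config"
}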
If you would rather include the credentials directly in your Terraform configuration, it would look like this:
provider "kubernetes" { host = "https://1.2.3.4" username = "AccountNameWithAccess" password = "GuessMe!" }
Step 2: Deploy NGINX in a pod
resource "kubernetes_pod" "nginx" { metadata { name = "nginx-example" labels { App = "nginx" } } spec { container { image = "nginx:1.15.2" name = "example" port { container_port = 80 } } } }
Step 3: Create a service to expose NGINX externally
resource "kubernetes_service" "nginx" { metadata { name = "nginx-example" } spec { selector { App = "${kubernetes_pod.nginx.metadata.0.labels.App}" } port { port = 80 target_port = 80 } type = "LoadBalancer" } }
You will also want the terraform apply command to output the load balancer's IP, so add this output block to the bottom of the configuration before running it. If you replace both instances of "ip" with "hostname", it will report a hostname instead, if one is set.
output "lb_ip" { value = "${kubernetes_service.nginx.load_balancer_ingress.0.ip}" }
Any configuration passed into container instances through a ConfigMap is visible to anyone with access to the cluster, which makes it a poor fit for sensitive data. If the containers need sensitive information to operate that you don't want to expose to the entire cluster, the Kubernetes provider has a mechanism for handling secrets. These secrets are most often certificates, API keys, and credentials used to access services.
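A minimal sketch using the provider's kubernetes_secret resource (the secret name, keys, and values are placeholders, not part of the scenario above):

resource "kubernetes_secret" "example" {
  metadata {
    name = "api-credentials"   # placeholder name
  }

  # Terraform base64-encodes these values before storing them in the cluster.
  data = {
    username = "svc-account"
    api_key  = "not-a-real-key"
  }

  type = "Opaque"
}

The secret can then be mounted into a pod as a volume or exposed through environment variables.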
Persistent storage
The scenario above does not define any persistent volumes; it uses strictly ephemeral storage. In reality, many applications (like databases) require data that persists across container restarts.
The Kubernetes provider has all the functionality required here: you can create a storage class to define where persistent volumes can be created, and then claim those volumes for use by pods.
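A minimal sketch using the provider's kubernetes_storage_class and kubernetes_persistent_volume_claim resources (the AWS EBS provisioner and gp2 type are assumptions for illustration; substitute the provisioner for your platform):

resource "kubernetes_storage_class" "example" {
  metadata {
    name = "standard"
  }

  # Assumes an AWS-backed cluster; use the provisioner for your platform.
  storage_provisioner = "kubernetes.io/aws-ebs"
  reclaim_policy      = "Retain"

  parameters = {
    type = "gp2"
  }
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "nginx-data"
  }

  spec {
    access_modes       = ["ReadWriteOnce"]
    storage_class_name = kubernetes_storage_class.example.metadata.0.name

    resources {
      # Request 1 GiB of persistent storage from the class above.
      requests = {
        storage = "1Gi"
      }
    }
  }
}

A pod can then reference the claim through a volume block in its spec.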
Scaling and quotas
Kubernetes' greatest strength is orchestrating multiple instances of an application across multiple nodes. The Kubernetes provider exposes several resources for managing these capabilities.
The Horizontal Pod Autoscaler sets the minimum and maximum number of replicas to run for a given workload; the cluster then scales the replica count within that range based on the CPU usage of the containers.
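A minimal sketch using the provider's kubernetes_horizontal_pod_autoscaler resource. Note that the autoscaler targets a controller such as a Deployment rather than a bare pod, so the Deployment named here is an assumption for illustration:

resource "kubernetes_horizontal_pod_autoscaler" "example" {
  metadata {
    name = "nginx-example"
  }

  spec {
    min_replicas = 2
    max_replicas = 10

    # Scale out when average CPU utilization across the pods exceeds 80%.
    target_cpu_utilization_percentage = 80

    scale_target_ref {
      api_version = "apps/v1"
      kind        = "Deployment"
      name        = "nginx-example"   # assumed Deployment name
    }
  }
}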
There is also a resource for the Replication Controller, which enforces the replica rules as defined: if too many pods are running, some will be killed; if too few, more will be started. Typically the replication controller acts on the cluster continuously, but there are cases where you want immediate action, such as when you raise the maximum number of replicas via the autoscaler resource.
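A minimal sketch using the provider's kubernetes_replication_controller resource, reusing the nginx image from the scenario above (for most new workloads a Deployment via kubernetes_deployment would be the more common choice):

resource "kubernetes_replication_controller" "nginx" {
  metadata {
    name = "nginx-rc-example"
    labels = {
      App = "nginx"
    }
  }

  spec {
    # The controller keeps exactly three matching pods running.
    replicas = 3

    selector = {
      App = "nginx"
    }

    template {
      metadata {
        labels = {
          App = "nginx"
        }
      }

      spec {
        container {
          image = "nginx:1.15.2"
          name  = "example"
        }
      }
    }
  }
}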
Limits and quotas can be used to cap the resources consumed by individual namespaces, typically memory, CPU, and disk. Once limits are in place, all pods started within that namespace must fit within the defined quota.
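A minimal sketch using the provider's kubernetes_resource_quota resource (the namespace and limit values are assumptions for illustration):

resource "kubernetes_resource_quota" "example" {
  metadata {
    name      = "dev-quota"
    namespace = "dev"   # assumed namespace
  }

  spec {
    # Hard caps on what all pods in the namespace may request and consume.
    hard = {
      "requests.cpu"    = "2"
      "requests.memory" = "4Gi"
      "limits.cpu"      = "4"
      "limits.memory"   = "8Gi"
    }
  }
}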
Conclusion
The Terraform Kubernetes provider offers the features necessary to manage the Kubernetes clusters in your environment, across as many cloud providers as you want.