HashiCorp Terraform is an open source tool that enables users to provision any infrastructure using a consistent workflow. While Terraform can manage infrastructure for both public and private cloud services, it can also manage external services like GitHub, Nomad, or Kubernetes pods. This post highlights the new Terraform Kubernetes provider, which enables operators to manage the lifecycle of Kubernetes resources using declarative infrastructure as code.

Terraform provisions infrastructure and infrastructure resources through an extensible ecosystem of providers (plugins). In addition to explaining the benefits of using Terraform over the Kubernetes CLI to manage Kubernetes resources, this post walks through using the new Kubernetes provider to interact with Kubernetes resources (pods, replication controllers, and services), enabling operators to control their lifecycle using infrastructure as code.

Why Terraform?

Q: Why would I use Terraform to manage Kubernetes resources as infrastructure as code?

Terraform uses the same declarative syntax to provision both the underlying infrastructure (compute, networking, and storage) and the scheduling (application) layer. Using graph theory, Terraform models the relationships between all dependencies in your infrastructure automatically. The same graph enables Terraform to detect drift as resources (like compute instances or Kubernetes pods) change over time. This drift is presented to the user for confirmation as part of the Terraform dry-run planning phase.
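You can inspect this dependency graph directly. The terraform graph command prints it in DOT format, which Graphviz can render as an image (graph.png here is just an example filename):

$ terraform graph | dot -Tpng > graph.png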

Terraform provides full lifecycle management of Kubernetes resources including creation and deletion of pods, replication controllers, and services.

Because Terraform understands the relationships between resources, it has an inherent understanding of the order of operations and failure conditions for creating, updating, and deleting resources. For example, if a persistent volume claim (PVC) requires space from a particular persistent volume (PV), Terraform automatically knows to create the PV before the PVC. If the PV fails to create, Terraform will not attempt to create the PVC, since Terraform knows the creation will fail.
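As a minimal sketch of that PV/PVC relationship (the resource names, sizes, and GCE disk source below are illustrative, not prescriptive):

resource "kubernetes_persistent_volume" "example" {
  metadata {
    name = "example-pv"
  }

  spec {
    capacity {
      storage = "2Gi"
    }

    access_modes = ["ReadWriteOnce"]

    persistent_volume_source {
      gce_persistent_disk {
        pd_name = "example-disk"
      }
    }
  }
}

resource "kubernetes_persistent_volume_claim" "example" {
  metadata {
    name = "example-pvc"
  }

  spec {
    access_modes = ["ReadWriteOnce"]

    resources {
      requests {
        storage = "2Gi"
      }
    }

    # Interpolating the PV's name here is what tells Terraform
    # to create the PV before the PVC.
    volume_name = "${kubernetes_persistent_volume.example.metadata.0.name}"
  }
}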

Unlike the kubectl CLI, Terraform will wait for services to become ready before creating dependent resources. This is useful when you want to guarantee state following the command's completion. As a concrete example of this behavior, Terraform will wait until a service is provisioned so it can add the service's IP to a load balancer. No manual processes necessary!

Getting started with the Kubernetes provider

This post assumes you already have a Kubernetes cluster up and running and that the cluster is accessible from the machine where Terraform runs. Terraform can also provision the Kubernetes cluster itself, but that is outside the scope of this post. The easiest way to configure the Kubernetes provider is to create a configuration file at ~/.kube/config; Terraform will automatically load that configuration during its run:

# main.tf
provider "kubernetes" {}</code></pre>

When it is not feasible to create a configuration file, you can configure the provider directly in its provider block or via environment variables. This is useful in CI systems or ephemeral environments that change frequently.
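A minimal sketch of static configuration, using the provider's standard connection attributes (the endpoint, credentials, and file paths below are placeholders):

provider "kubernetes" {
  # Placeholder values; substitute your cluster's endpoint and credentials.
  host     = "https://104.196.242.174"
  username = "ClusterMaster"
  password = "MindTheGap"

  client_certificate     = "${file("~/.kube/client-cert.pem")}"
  client_key             = "${file("~/.kube/client-key.pem")}"
  cluster_ca_certificate = "${file("~/.kube/cluster-ca-cert.pem")}"
}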

After specifying the provider, initialize Terraform. This will download and install the latest version of the Terraform Kubernetes provider.

$ terraform init

Initializing provider plugins...
- Downloading plugin for provider "kubernetes"...

Terraform has been successfully initialized!

Scheduling a Simple Application

At the core of a Kubernetes application is the pod. A pod consists of one or more containers, which are scheduled on cluster nodes based on available CPU or memory.

Next, use Terraform to create a pod with a single container running http-echo, exposing port 80 to the user through a load balancer. By adding labels, Kubernetes can discover all matching pods (instances) and route traffic to the exposed port automatically.

resource "kubernetes_pod" "echo" {
metadata {
name = "echo-example"
labels {
App = "echo"
} }
spec {
container {
image = "hashicorp/http-echo:0.2.1"
name = "example2"
args = ["-listen=:80", "-text='Hello World'"]
port {
container_port = 80
} } } }

The above is only an example and does not represent best practices. In production scenarios, you would run more than one instance of your application for high availability, typically behind a replication controller, as sketched below.
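A minimal sketch of that approach, assuming the provider's original replication controller syntax (the template block embeds the pod spec directly; the name and replica count are illustrative):

resource "kubernetes_replication_controller" "echo" {
  metadata {
    name = "echo-rc-example"
  }

  spec {
    replicas = 3

    selector {
      App = "echo"
    }

    # The template describes the pods this controller keeps running.
    template {
      container {
        image = "hashicorp/http-echo:0.2.1"
        name  = "example2"
        args  = ["-listen=:80", "-text='Hello World'"]

        port {
          container_port = 80
        }
      }
    }
  }
}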

To expose the pod to end users, provision a service. On some cloud providers, a service can provision a load balancer, and it maintains the relationship between pods and that load balancer as new pods are launched.

resource "kubernetes_service" "echo" {
metadata {
name = "echo-example"
}
spec {
selector {
App = "${kubernetes_pod.echo.metadata.0.labels.App}"
}
port {
port = 80
target_port = 80
}
type = "LoadBalancer"
} }

output "lb_ip" {
value = "${kubernetes_service.echo.load_balancer_ingress.0.ip}"
}

In addition to specifying the service, this Terraform configuration also specifies an output. The output is displayed at the end of terraform apply and prints the IP of the load balancer, making it easily accessible to an operator (human) or to any tools/scripts that need it.
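Once the apply completes, the same value can be read back at any time with the terraform output command:

$ terraform output lb_ip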

The plan provides an overview of the actions Terraform intends to take. In this case, the output will show two resources (one pod + one service). As the number of infrastructure and application resources grows, the terraform plan command becomes useful for understanding the impact and rollout effect of updates and changes. Run terraform plan now:

$ terraform plan

# ...

+ kubernetes_pod.echo
metadata.#: "1"
metadata.0.generation: "<computed>"
metadata.0.labels.%: "1"
metadata.0.labels.App: "echo"
metadata.0.name: "echo-example"
metadata.0.namespace: "default"
spec.#: "1"
spec.0.container.#: "1"
spec.0.container.0.args.#: "2"
spec.0.container.0.args.0: "-listen=:80"
spec.0.container.0.args.1: "-text='Hello World'"
spec.0.container.0.image: "hashicorp/http-echo:0.2.1"
spec.0.container.0.image_pull_policy: "<computed>"
spec.0.container.0.name: "example2"
spec.0.container.0.port.#: "1"
spec.0.container.0.port.0.container_port: "80"
...

+ kubernetes_service.echo
load_balancer_ingress.#: "<computed>"
metadata.#: "1"
metadata.0.generation: "<computed>"
metadata.0.name: "echo-example"
metadata.0.namespace: "default"
metadata.0.resource_version: "<computed>"
metadata.0.self_link: "<computed>"
metadata.0.uid: "<computed>"
spec.#: "1"
spec.0.cluster_ip: "<computed>"
spec.0.port.#: "1"
spec.0.port.0.node_port: "<computed>"
spec.0.port.0.port: "80"
spec.0.port.0.protocol: "TCP"
spec.0.port.0.target_port: "80"
spec.0.selector.%: "1"
spec.0.selector.App: "echo"
spec.0.session_affinity: "None"
spec.0.type: "LoadBalancer"

Plan: 2 to add, 0 to change, 0 to destroy.

The terraform plan command never modifies resources; it is purely a dry run. To apply these changes, run terraform apply. This command creates resources via API calls, handling ordering, failures, and conditionals. Additionally, terraform apply blocks until all resources have finished provisioning. Run terraform apply now:

$ terraform apply

kubernetes_pod.echo: Creating...
...
kubernetes_pod.echo: Creation complete (ID: default/echo-example)
kubernetes_service.echo: Creating...
...
kubernetes_service.echo: Still creating... (10s elapsed)
kubernetes_service.echo: Still creating... (20s elapsed)
kubernetes_service.echo: Still creating... (30s elapsed)
kubernetes_service.echo: Still creating... (40s elapsed)
kubernetes_service.echo: Creation complete (ID: default/echo-example)

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

# ...

Outputs:

lb_ip = 35.197.9.247

To verify the application is running, use curl from your terminal:

$ curl -s $(terraform output lb_ip)

If everything worked as expected, you will see the text Hello World.

The Kubernetes UI provides another way to check that both the pod and the service are in place once they are scheduled.
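If you have kubectl configured against the same cluster, you can also verify from the command line:

$ kubectl get pods,services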

Updating the Application

Over time, you will need to deploy new versions of your application. The easiest way to perform an upgrade is to change the image field in the configuration accordingly.

resource "kubernetes_pod" "example" {
# ...

spec {
container {
image = "hashicorp/http-echo:0.2.3"
# ...
}

To verify the changes Terraform will make, run terraform plan and inspect the output. This also verifies that no one else on the team has modified the resource created earlier.

$ terraform plan
Refreshing Terraform state in-memory prior to plan...

kubernetes_pod.echo: Refreshing state... (ID: default/echo-example)
kubernetes_service.echo: Refreshing state... (ID: default/echo-example)

...

~ kubernetes_pod.echo
spec.0.container.0.image: "hashicorp/http-echo:0.2.1" => "hashicorp/http-echo:0.2.3"

Plan: 0 to add, 1 to change, 0 to destroy.

Then apply the changes:

$ terraform apply

kubernetes_pod.echo: Refreshing state... (ID: default/echo-example)
kubernetes_service.echo: Refreshing state... (ID: default/echo-example)
kubernetes_pod.echo: Modifying... (ID: default/echo-example)
spec.0.container.0.image: "hashicorp/http-echo:0.2.1" => "hashicorp/http-echo:0.2.3"
kubernetes_pod.echo: Modifications complete (ID: default/echo-example)

Apply complete! Resources: 0 added, 1 changed, 0 destroyed.

Upon completion, Kubernetes kills the old container and starts a new one from the updated image.
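When you are done experimenting, the same workflow manages teardown as well; Terraform deletes the service and the pod, honoring the dependency order:

$ terraform destroy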

Conclusion

Terraform provides organizations with infrastructure as code, cloud platform management, and the ability to create modules for self-service infrastructure. This post showed one example of how Terraform can manage the resources and applications scheduled on a Kubernetes cluster.

For more information, check out the complete guide for Managing Kubernetes with Terraform.

To learn more about HashiCorp Terraform, visit hashicorp.com/terraform.html.