Kubernetes Controller

Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. One of the core components of Kubernetes is the Controller. In this article, we will explore what a Kubernetes controller is, its role in the Kubernetes ecosystem, and how to create a custom controller using code examples.

What is a Kubernetes Controller?

A Kubernetes controller is a control loop that continuously watches the state of the cluster and takes action to reconcile the current state with the desired state. It does so by creating, updating, or deleting resources as needed, so that the cluster converges on, and stays at, the state its users declared.
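The reconcile idea can be sketched independently of Kubernetes: compare the observed state with the desired state and emit the actions that close the gap. The names below are illustrative, not part of any Kubernetes API:

```go
package main

import "fmt"

// reconcile compares a desired and a current replica count and returns
// the actions needed to converge them — a toy stand-in for a controller.
func reconcile(desired, current int) []string {
	var actions []string
	for current < desired {
		actions = append(actions, "create replica")
		current++
	}
	for current > desired {
		actions = append(actions, "delete replica")
		current--
	}
	return actions
}

func main() {
	// Current state has 1 replica; desired state wants 3.
	for _, a := range reconcile(3, 1) {
		fmt.Println(a)
	}
}
```

Built-in and custom controllers alike repeat this comparison every time a watched object changes.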

Kubernetes ships with several built-in controllers, such as the Deployment, ReplicaSet, StatefulSet, and CronJob controllers. Each one manages a particular resource type: keeping the desired number of replicas running, maintaining the declared state of the system, and replacing Pods that fail.

Creating a Custom Controller

To create a custom controller in Kubernetes, we need to follow a few steps:

  1. Define the Custom Resource Definition (CRD): A CRD defines a new resource type in Kubernetes. It specifies the structure and validation rules for the custom resource. Let's consider an example of a custom resource called MyResource.

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: myresources.example.com
    spec:
      group: example.com
      versions:
        - name: v1
          served: true
          storage: true
          # apiextensions.k8s.io/v1 requires a structural schema per version
          schema:
            openAPIV3Schema:
              type: object
              properties:
                spec:
                  type: object
                  properties:
                    message:   # illustrative spec field
                      type: string
      scope: Namespaced
      names:
        plural: myresources
        singular: myresource
        kind: MyResource
        shortNames:
          - mr
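
With the CRD registered, instances of MyResource can be created like any other Kubernetes object. A minimal manifest might look like the following; the spec.message field is an illustrative assumption, not something the API requires:

```yaml
apiVersion: example.com/v1
kind: MyResource
metadata:
  name: my-sample
  namespace: default
spec:
  message: "hello"  # illustrative field; declare it in the CRD schema
```

Applying a manifest like this with kubectl apply is what produces the Add events our controller watches for in the next step.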
    
  2. Implement the Controller: We can use the Kubernetes client libraries (client-go, plus a clientset generated for our CRD with code-generator) to implement the logic of our custom controller. Let's consider a simple example where the controller watches for changes to MyResource and logs the key of each resource.

    package main
    
    import (
        "context"
        "flag"
        "fmt"
        "os"
        "os/signal"
        "syscall"
    
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/workqueue"
    
        // These two imports are assumed to be generated for the MyResource
        // CRD with k8s.io/code-generator; adjust the paths to your module.
        examplev1 "example.com/myresource/pkg/apis/example.com/v1"
        clientset "example.com/myresource/pkg/generated/clientset/versioned"
    )
    
    func main() {
        kubeconfig := flag.String("kubeconfig", "/path/to/kubeconfig", "Path to the kubeconfig file")
        flag.Parse()
    
        // Build the client configuration and a clientset for the
        // example.com API group (the core clientset from
        // kubernetes.NewForConfig does not know about our CRD)
        config, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
        if err != nil {
            panic(err.Error())
        }
        client, err := clientset.NewForConfig(config)
        if err != nil {
            panic(err.Error())
        }
    
        // Create a rate-limited work queue
        queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
        defer queue.ShutDown()
    
        // Create an informer to watch for changes to MyResource in "default"
        informer := cache.NewSharedIndexInformer(
            &cache.ListWatch{
                ListFunc: func(options metav1.ListOptions) (runtime.Object, error) {
                    return client.ExampleV1().MyResources("default").List(context.TODO(), options)
                },
                WatchFunc: func(options metav1.ListOptions) (watch.Interface, error) {
                    return client.ExampleV1().MyResources("default").Watch(context.TODO(), options)
                },
            },
            &examplev1.MyResource{},
            0, // resync period; 0 disables periodic resync
            cache.Indexers{},
        )
    
        // Enqueue a namespace/name key for every add, update, and delete
        informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
            AddFunc: func(obj interface{}) {
                if key, err := cache.MetaNamespaceKeyFunc(obj); err == nil {
                    queue.Add(key)
                }
            },
            UpdateFunc: func(oldObj, newObj interface{}) {
                if key, err := cache.MetaNamespaceKeyFunc(newObj); err == nil {
                    queue.Add(key)
                }
            },
            DeleteFunc: func(obj interface{}) {
                // The deletion-handling variant also copes with tombstones
                if key, err := cache.DeletionHandlingMetaNamespaceKeyFunc(obj); err == nil {
                    queue.Add(key)
                }
            },
        })
    
        // Start the informer and the reconcile loop (Run is shown in step 3)
        stopCh := make(chan struct{})
        defer close(stopCh)
        go informer.Run(stopCh)
        go Run(queue, stopCh)
    
        // Block until a shutdown signal arrives
        sigCh := make(chan os.Signal, 1)
        signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
        <-sigCh
    }
    
  3. Reconcile the State: The controller's worker loop dequeues keys from the work queue and reconciles the cluster state for each one. Here we simply log the key; a real controller would fetch the object from the informer's cache and drive the cluster toward its spec.

    func Run(queue workqueue.RateLimitingInterface, stopCh <-chan struct{}) {
        for {
            // Get blocks until a key is available; quit becomes true once
            // the queue has been shut down
            key, quit := queue.Get()
            if quit {
                return
            }
    
            // "Reconcile": log the namespace/name key of the resource
            fmt.Println(key)
    
            // On success, clear the key's rate-limit history and mark it done
            queue.Forget(key)
            queue.Done(key)
        }
    }
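
In a real controller the reconcile step can fail, for example when an API request times out. Rather than calling Forget immediately, failed keys are normally requeued with queue.AddRateLimited(key) and only forgotten after success or after too many retries. That retry shape can be sketched without client-go; the slice and maps below are illustrative stand-ins for the rate-limited work queue:

```go
package main

import "fmt"

// makeProcessor returns a reconcile function that fails the first two
// attempts for each key, simulating transient errors.
func makeProcessor() func(key string) error {
	failures := map[string]int{}
	return func(key string) error {
		if failures[key] < 2 {
			failures[key]++
			return fmt.Errorf("transient error for %s", key)
		}
		return nil
	}
}

func main() {
	const maxRetries = 5
	queue := []string{"default/myresource-1"}
	retries := map[string]int{}
	process := makeProcessor()

	// Dequeue keys, requeueing on failure up to maxRetries, the way a
	// controller would via AddRateLimited, then Forget on success.
	for len(queue) > 0 {
		key := queue[0]
		queue = queue[1:]
		if err := process(key); err != nil && retries[key] < maxRetries {
			retries[key]++
			queue = append(queue, key) // stand-in for queue.AddRateLimited(key)
			continue
		}
		retries[key] = 0 // stand-in for queue.Forget(key)
		fmt.Println("reconciled", key)
	}
}
```

With a real workqueue, the requeue backs off exponentially between attempts instead of retrying immediately, which keeps a misbehaving object from monopolizing the worker.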
    

Conclusion

In this article, we explored what a Kubernetes controller is and how its control loop keeps a cluster at its desired state. We then walked through building a custom controller: defining a CRD, watching the custom resource with an informer, enqueueing keys on changes, and processing them from a rate-limited work queue. This watch, enqueue, reconcile pattern underlies both the built-in controllers and higher-level frameworks such as controller-runtime and Kubebuilder, which handle much of this boilerplate for production controllers.