
Kubernetes Native Support For Sidecar Containers

Youcef Guichi · May 1, 2024

Dear fellow Cloud citizens! In this article, we will go through the classical implementation of sidecar containers and how they are used, what twists you might face, and the workarounds the community has been using. Next, we present the new native sidecar feature introduced in Kubernetes 1.28 and how it tackles the issue.

Happy reading!

Sidecars vs Init Containers

In Kubernetes, both sidecars and init containers serve distinct roles within pods. Sidecar containers operate alongside the main application container, providing supplementary functionality such as logging, monitoring, or proxying. Init containers, on the other hand, execute initialization tasks before the main application container starts. They run to completion, one after another, ensuring that prerequisites like configuration, setup, or data population are fulfilled before the primary application begins.
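For instance, a classic (pre-1.28) sidecar is simply a second entry under containers, sharing a volume with the app. A minimal sketch, with placeholder image names, could look like this:

# classic-sidecar.yaml -- illustrative sketch, image names are placeholders
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {} # scratch space shared by the app and the sidecar
  containers:
    - name: app # main application container, writes logs to /var/log/app
      image: ghcr.io/example/app:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app
    - name: log-shipper # sidecar, reads and ships the same logs
      image: ghcr.io/example/log-shipper:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/app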

The Problem before k8s 1.28

Before 1.28, Kubernetes did not know which container you were using as a sidecar and which as the main container; that being so, the responsibility of distinguishing and managing sidecars was handed to us.

Let's say your main container runs a job, and in order for it to function, it needs a pre-established service it can reach over localhost. How can this be implemented in such a scenario? Well, your main container's job should have a health check for the sidecar service. If the health check passes, the service is ready and the main container's job can start.
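One common way to implement that check is a small wait loop at the start of the main container's command. A rough sketch (the image, port, and endpoint here are assumptions, not part of the original example):

      containers:
        - name: main-job
          image: curlimages/curl # placeholder image that ships with curl
          command:
            - sh
            - -c
            - |
              # wait until the sidecar's service answers on localhost
              until curl -sf http://127.0.0.1:80/; do
                echo "waiting for sidecar..."; sleep 2
              done
              echo "sidecar is ready, starting the job"
              # ... actual job logic goes here ...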

Now, let's suppose the job has completed successfully. Would that lead the pod to enter the Completed state? Unfortunately... NO!

Remember, we still have the sidecar container running a service for us, and it has a lifecycle independent of the main container's. What should we do in this case? The workaround here is to notify the sidecar container of the main container's success: write the info to a file on a shared volume, and have the sidecar check for it periodically. If it finds a success, it exits. Once all containers have finished successfully, the pod enters the Completed state.
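Putting the two halves together, the pre-1.28 handshake could be sketched like this (images, paths, and the sleep-based "work" are purely illustrative):

# workaround.yaml -- sketch of the pre-1.28 handshake over a shared volume
apiVersion: v1
kind: Pod
metadata:
  name: job-with-sidecar-workaround
spec:
  restartPolicy: Never
  volumes:
    - name: shared
      emptyDir: {} # used to pass the "done" marker between containers
  containers:
    - name: main-job
      image: busybox
      volumeMounts:
        - name: shared
          mountPath: /shared
      command:
        - sh
        - -c
        - |
          echo "doing the actual work..."
          sleep 5
          # signal the sidecar that the job is done
          touch /shared/done
    - name: sidecar
      image: busybox
      volumeMounts:
        - name: shared
          mountPath: /shared
      command:
        - sh
        - -c
        - |
          # stand-in for the real service; poll for the marker file and exit
          while [ ! -f /shared/done ]; do
            sleep 2
          done
          echo "main job finished, exiting"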

Init containers, on the other hand, can ensure the sequencing for our job. However, init containers run to completion before the main container starts, so the main container can't reach a service running in an init container.

So, what's next?

Native sidecar container support in k8s 1.28

How did the k8s maintainers tackle the issue in 1.28?

One extra field, restartPolicy: Always, is added to the init container definition. If it is set, Kubernetes will handle the init container as a sidecar. In addition, the sidecar still starts first, just like a regular init container; once it is ready, the main container can start.
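In manifest terms, the change boils down to this fragment (the name and image are placeholders); the full Job used for the experiment below shows the same field in context:

  initContainers:
    - name: my-sidecar # placeholder
      image: ghcr.io/example/sidecar:latest
      restartPolicy: Always # new in 1.28: run this init container as a sidecar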

Let's perform some experiments as follows!

We have to enable the SidecarContainers feature gate (SidecarContainers: true) in order to experiment with this feature.

# cluster-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
featureGates:
  SidecarContainers: true

Spin up a cluster using kind:

kind create cluster --name sidecars --config=cluster-config.yaml

We have a sample Job that has an init container with restartPolicy set to Always; the init container serves an API at 0.0.0.0:80.

The main container is an nginx image that will curl the API served by the init container's image.

apiVersion: batch/v1
kind: Job
metadata:
  name: podbee
spec:
  template:
    spec:
      initContainers:
        - name: podbee
          image: ghcr.io/biznesbees/podbee:v0.1.1
          restartPolicy: Always # run this init container as a sidecar
      containers:
        - name: nginx
          image: nginx
          command:
            - sh
            - -c
            - curl 0.0.0.0:80 # podbee's api
      restartPolicy: Never
  backoffLimit: 4

As we see below, the init container initialized first, as expected, then the pod went through PodInitializing. The interesting part is that both containers were running at the same time.


➜  root git:(main) ✗ kubectl get po -w
NAME           READY   STATUS     RESTARTS   AGE
podbee-gf9x8   0/2     Init:0/1   0          71s
podbee-gf9x8   1/2     PodInitializing   0          2m16s
podbee-gf9x8   2/2     Running           0          2m21s
podbee-gf9x8   1/2     Completed         0          2m30s

If we describe the pod, we see that after the main container finishes, Kubernetes kills the sidecar container. The sidecar's lifecycle thus becomes tied to that of the pod and the main container.
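The relevant events can be pulled with kubectl describe, using the pod name from the watch output above:

kubectl describe po podbee-gf9x8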

Events:
  Normal  Created    98s    kubelet            Created container debian
  Normal  Started    98s    kubelet            Started container debian
  Normal  Killing    89s    kubelet            Stopping container podbee

Also, as we see in the logs, the main container (our nginx, shown as 127.0.0.1:52756) was able to curl the sidecar at 0.0.0.0:80.

➜ root git:(main) ✗ kubectl logs podbee-nggdg -c podbee
INFO:     Started server process [13]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:80 (Press CTRL+C to quit)
INFO:     127.0.0.1:52756 - "GET / HTTP/1.1" 200 OK
➜ root git:(main) ✗ kubectl logs podbee-nggdg -c nginx
{"message":"Thank you for using PodBee!"}

Conclusion

  • The feature looks promising, as it takes the weight off the developers' shoulders and leverages a native solution.
  • For now, it is still an experimental feature and is not recommended for production use.
  • For more details, you may take a look at the official announcement on the Kubernetes website.