Kubernetes in Docker

If you’ve ever wanted to experiment with Kubernetes but felt put off by the overhead of setting up a full cluster, Kind (Kubernetes in Docker) is your new best friend. Kind makes it easy to spin up lightweight Kubernetes clusters inside Docker containers: perfect for local development, testing CI/CD pipelines, or learning Kubernetes without the complexity.

It’s a powerful tool that you can use to experiment with Kubernetes, build prototypes, run tests, and even integrate into CI pipelines.

In this post, we’ll walk through the basics of using Kind and see how easy it is to get a cluster running for everyday tasks.

Install kind and kubectl

Install kind with the following commands:

BASH
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.30.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
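
If everything went well, kind is now on your PATH; you can confirm with (exact output varies by release):

BASH
kind version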

Then install kubectl:

BASH
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
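
Likewise, confirm the kubectl client works:

BASH
kubectl version --client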

Create a cluster

Now that we have all the requirements, create a directory to store your configuration files:

BASH
mkdir -p ~/kind/

Now define a cluster configuration file named ~/kind/2-node.yaml with the following content:

YAML
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubetest
nodes:
- role: control-plane
- role: worker
# The networking section may be useful in certain circumstances to avoid IP conflicts.
# In this case you can keep these lines commented.
#networking:
#  podSubnet: "10.244.0.0/16"
#  serviceSubnet: "10.96.0.0/12"

This configuration defines one control plane node and one worker node. Create a cluster:

BASH
test@test:~/kind$ kind create cluster --config 2-node.yaml
Creating cluster "kubetest" ...
 ✓ Ensuring node image (kindest/node:v1.34.0) 🖼 
 ✓ Preparing nodes 📦 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
 ✓ Joining worker nodes 🚜 
Set kubectl context to "kind-kubetest"
You can now use your cluster with:

kubectl cluster-info --context kind-kubetest

Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
test@test:~/kind$ 

kind has already set the kubectl context for you; verify that kubectl can reach the cluster:

BASH
kubectl cluster-info --context kind-kubetest
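
If you juggle several clusters, note that kind names its contexts kind-<cluster-name>; you can list them and switch explicitly with:

BASH
kubectl config get-contexts
kubectl config use-context kind-kubetest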

Check that everything is working correctly:

BASH
test@test:~/kind$ kubectl get po -A
NAMESPACE            NAME                                             READY   STATUS    RESTARTS   AGE
kube-system          coredns-66bc5c9577-4hzsw                         1/1     Running   0          12m
kube-system          coredns-66bc5c9577-msqvz                         1/1     Running   0          12m
kube-system          etcd-kubetest-control-plane                      1/1     Running   0          12m
kube-system          kindnet-p58xh                                    1/1     Running   0          12m
kube-system          kindnet-r974g                                    1/1     Running   0          12m
kube-system          kube-apiserver-kubetest-control-plane            1/1     Running   0          12m
kube-system          kube-controller-manager-kubetest-control-plane   1/1     Running   0          12m
kube-system          kube-proxy-c525v                                 1/1     Running   0          12m
kube-system          kube-proxy-kzwj9                                 1/1     Running   0          12m
kube-system          kube-scheduler-kubetest-control-plane            1/1     Running   0          12m
local-path-storage   local-path-provisioner-7b8c8ddbd6-cjvw5          1/1     Running   0          12m
test@test:~/kind$ kubectl get nodes
NAME                     STATUS   ROLES           AGE   VERSION
kubetest-control-plane   Ready    control-plane   12m   v1.34.0
kubetest-worker          Ready    <none>          12m   v1.34.0
Click to expand and view more

You should see all system pods running and both nodes in the Ready state.
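
If you script this check (for instance in one of the CI pipelines mentioned earlier), kubectl wait can block until the nodes are Ready instead of polling by eye; a minimal sketch:

BASH
kubectl wait --for=condition=Ready nodes --all --timeout=120s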

Requirements for Service type LoadBalancer

To use services of type LoadBalancer in Kind, you need to install the cloud-provider-kind binary on the host system. This allows you to expose services externally, similar to what you’d expect in a production environment.

Download the latest release from the cloud-provider-kind releases page (https://github.com/kubernetes-sigs/cloud-provider-kind/releases).

Then unpack and install it:

BASH
tar -xvf cloud-provider-kind_<version>_linux_amd64.tar.gz
sudo install cloud-provider-kind /usr/local/bin/

Now run it, and keep it running until we destroy the kind cluster (sudo is needed so that cloud-provider-kind can add IPs to the Docker network interface on the host):

BASH
sudo nohup cloud-provider-kind -enable-log-dumping -logs-dir ~/kind/ &

Then check the nohup.out log to make sure everything is working as it should and no errors are reported:

BASH
sudo cat nohup.out

Let’s test it!

Create an nginx deployment and expose it as a LoadBalancer service:

BASH
kubectl create deployment nginx --image=nginx:latest
kubectl expose deployment nginx --port=80 --protocol=TCP --target-port=80 --name=nginx-svc --type=LoadBalancer
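
Optionally, wait for the rollout to complete before checking the service:

BASH
kubectl rollout status deployment/nginx --timeout=120s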

Verify that the service was assigned an external IP:

BASH
test@test:~$ kubectl get svc nginx-svc
NAME        TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx-svc   LoadBalancer   10.96.80.114   172.19.0.4    80:31254/TCP   2m55s

Now curl the service from the host; it should be reachable:

BASH
test@test:~/kind$ curl $(kubectl get svc nginx-svc -o=jsonpath='{.status.loadBalancer.ingress[0].ip}'):80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

At this point, a new log file should be available, which can be used for troubleshooting or to understand the magic behind the scenes:

BASH
cat ~/kind/default_nginx-svc.log

Once you’re done, delete the cluster using the command:

BASH
kind delete cluster --name kubetest
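
If you started cloud-provider-kind earlier, stop it as well. You can confirm the cluster is gone and kill the background process (pkill -f matches the full command line; adjust the pattern if you run other matching processes):

BASH
kind get clusters
sudo pkill -f cloud-provider-kind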

That’s all! Right? Yes, you can play with your Kubernetes cluster as long as you like, but if you want to see something more interesting, let’s explore some more “advanced” scenarios.

Configure a private registry

What if you need to use a private registry in your kind cluster?

First, we need to log in to the registry from the kind host. If you are going to pull a private image stored on Docker Hub, just run:

BASH
docker login 

If instead you need to log in to a different registry:

BASH
docker login registry.example.com[:port]

and follow the login flow. At the end you will have a file $HOME/.docker/config.json with all the credentials and info that we need, so let’s mount that file into our kind cluster nodes.
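
You can peek at the file to confirm an auth entry was stored; the structure below is illustrative, with the token redacted. Note that if Docker is configured with a credential helper (a credsStore entry), the credentials may live outside this file and this mount trick won’t carry them over:

BASH
cat ~/.docker/config.json
# {
#   "auths": {
#     "https://index.docker.io/v1/": { "auth": "<redacted>" }
#   }
# }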

On each node that needs to use the registry, add that file as an extraMount, as shown here:

YAML
[...]
nodes:
- role: control-plane
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /home/test/.docker/config.json # change the path accordingly
- role: worker
  extraMounts:
  - containerPath: /var/lib/kubelet/config.json
    hostPath: /home/test/.docker/config.json # change the path accordingly
[...]

Then recreate the cluster with:

BASH
kind create cluster --config 2-node.yaml
kubectl cluster-info --context kind-kubetest

Once the cluster is back up, you’ll be able to pull images from your private registry/repository.
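
To verify, try a deployment that pulls from your private registry (the image name below is a hypothetical placeholder; substitute your own):

BASH
kubectl create deployment private-app --image=registry.example.com/myteam/myapp:latest
kubectl get pods -l app=private-app   # should reach Running, not ImagePullBackOff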

Use a different CNI

Do you want to mimic the Kubernetes infrastructure of your production cluster as closely as possible, deploying the same CNI? Kind allows us to do that too.

In your cluster configuration, disable the default CNI:

YAML
networking:
  # the default CNI will not be installed
  disableDefaultCNI: true

If you also want to replace kube-proxy:

YAML
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none" # Options: iptables, nftables, ipvs or none

For example, to use Cilium in this scenario, you can define a 4-node cluster in ~/kind/4-node-nocni.yaml:

YAML
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: kubetest
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
networking:
  disableDefaultCNI: true
  kubeProxyMode: "none"

Create the cluster:

BASH
kind create cluster --config 4-node-nocni.yaml
kubectl cluster-info --context kind-kubetest

Then install Cilium following the Cilium Quick Installation documentation (https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/): install cilium-cli on the host and use it to deploy Cilium with the command:

BASH
cilium install --context kind-kubetest \
    --version 1.18.2 \
    --set kubeProxyReplacement=true,routingMode=native,autoDirectNodeRoutes=true,loadBalancer.mode=dsr,ipv4NativeRoutingCIDR="10.244.0.0/16"

In this scenario, I added some more settings: if you are curious, look them up. I’m sure some of them may be beneficial for your use case too. Maybe I will write a separate blog post on this topic. Oh, and you don’t need to worry about the “10.244.0.0/16” setting: it’s the default pod subnet in kind, so if you don’t change it explicitly in your kind configuration file, you don’t need to edit this field here.
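
For instance, if your kind config had set a custom pod subnet (the 10.200.0.0/16 below is a made-up value), you would mirror it in the install flag; abridged here to the relevant option:

BASH
# kind config would contain:  podSubnet: "10.200.0.0/16"
cilium install --context kind-kubetest --version 1.18.2 \
    --set ipv4NativeRoutingCIDR="10.200.0.0/16"   # must match the kind podSubnet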

Now let’s validate the setup, starting with the overall status:

BASH
test@test:~/kind$ cilium status --wait
    /¯¯\
 /¯¯\__/¯¯\    Cilium:             OK
 \__/¯¯\__/    Operator:           OK
 /¯¯\__/¯¯\    Envoy DaemonSet:    OK
 \__/¯¯\__/    Hubble Relay:       disabled
    \__/       ClusterMesh:        disabled

DaemonSet              cilium                   Desired: 4, Ready: 4/4, Available: 4/4
DaemonSet              cilium-envoy             Desired: 4, Ready: 4/4, Available: 4/4
Deployment             cilium-operator          Desired: 1, Ready: 1/1, Available: 1/1
Containers:            cilium                   Running: 4
                       cilium-envoy             Running: 4
                       cilium-operator          Running: 1
                       clustermesh-apiserver    
                       hubble-relay             
Cluster Pods:          3/3 managed by Cilium
Helm chart version:    1.18.2
Image versions         cilium             quay.io/cilium/cilium:v1.18.2@sha256:858f807ea4e20e85e3ea3240a762e1f4b29f1cb5bbd0463b8aa77e7b097c0667: 4
                       cilium-envoy       quay.io/cilium/cilium-envoy:v1.34.7-1757592137-1a52bb680a956879722f48c591a2ca90f7791324@sha256:7932d656b63f6f866b6732099d33355184322123cfe1182e6f05175a3bc2e0e0: 4
                       cilium-operator    quay.io/cilium/operator-generic:v1.18.2@sha256:cb4e4ffc5789fd5ff6a534e3b1460623df61cba00f5ea1c7b40153b5efb81805: 1

and then check whether the KubeProxyReplacement setting was applied correctly:

BASH
test@test:~/kind$ kubectl -n kube-system exec ds/cilium -- cilium-dbg status | grep KubeProxyReplacement
KubeProxyReplacement:    True   [eth0    172.19.0.2 fc00:f853:ccd:e793::2 fe80::50f9:75ff:feca:eeb (Direct Routing)]

So Cilium is correctly replacing kube-proxy. Everything looks fine. Finally, test networking by deploying an app with multiple replicas, to see if pod networking works as expected:

BASH
kubectl create deployment nginx --image=nginx:latest --replicas=6

and check them:

BASH
test@test:~/kind$ kubectl get po -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP             NODE               NOMINATED NODE   READINESS GATES
nginx-7c5d8bf9f7-45l2d   1/1     Running   0          21s   10.244.1.241   kubetest-worker    <none>           <none>
nginx-7c5d8bf9f7-gpstv   1/1     Running   0          21s   10.244.3.54    kubetest-worker2   <none>           <none>
nginx-7c5d8bf9f7-kr8v6   1/1     Running   0          21s   10.244.3.226   kubetest-worker2   <none>           <none>
nginx-7c5d8bf9f7-qrxr5   1/1     Running   0          21s   10.244.2.153   kubetest-worker3   <none>           <none>
nginx-7c5d8bf9f7-x57nc   1/1     Running   0          21s   10.244.2.150   kubetest-worker3   <none>           <none>
nginx-7c5d8bf9f7-zb56n   1/1     Running   0          21s   10.244.1.166   kubetest-worker    <none>           <none>

You should see pods distributed across the nodes with proper IPs, as shown above. Now you can enjoy your Kubernetes cluster, using kind with the same CNI you have in your production environment! A couple of final checks are sketched below.
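
Two optional checks round this out: Cilium’s own connectivity suite, and a quick in-cluster curl against a service. A minimal sketch (the service and pod names are arbitrary, and curlimages/curl is just a convenient client image):

BASH
# End-to-end Cilium test suite; deploys its own workloads and takes several minutes
cilium connectivity test --context kind-kubetest

# Quick manual check: expose the deployment and curl it from a one-off pod
kubectl expose deployment nginx --port=80 --name=nginx-svc
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- curl -s http://nginx-svc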

Whether you’re experimenting with Kubernetes, testing configurations, or replicating parts of a production environment (including CNIs and private registries), Kind makes the process simple and fast. If you’re a DevOps engineer or system administrator looking for a reliable way to spin up test clusters, Kind is definitely worth adding to your toolkit.

Have fun experimenting, and happy clustering!
