Hello Devs 👋! In this post, we’ll explore another Kubernetes object: the DaemonSet.
· Understanding Kubernetes DaemonSet
∘ What is a DaemonSet?
∘ How Do DaemonSets Work?
∘ Common Use Cases of DaemonSet
· How Daemon Pods are scheduled
· Real-world examples
∘ Minikube Kubernetes cluster
∘ Creating a DaemonSet
∘ Updating a DaemonSet
∘ Deleting a DaemonSet
· Conclusion
· References
This series of stories shows how to use Kubernetes in the Spring ecosystem. We work with a Spring Boot API and Minikube to have a lightweight and fast development environment similar to production.
- Lab1 (Spring Boot/K8S): Deploy Spring Boot application on Kubernetes
- Lab2 (Spring Boot/K8S): Kubernetes health probes with Spring Boot
- Lab3 (Spring Boot/K8S): Mastering ConfigMaps in Kubernetes
- Lab4 (Spring Boot/K8S): Using Kubernetes Secrets in Spring Boot
- Lab5 (Spring Boot/K8S): Understanding Kubernetes Resources Management
- Lab6 (Spring Boot/K8S): Persistent Volumes in Kubernetes
- Lab7 (Spring Boot/K8S): Spring Batch on Kubernetes — Jobs and CronJobs
- Lab8 (Spring Boot/K8S): Deploy a Spring Boot application on Kubernetes using Helm Chart
- 👉 Lab9 (Spring Boot/K8S): Understanding Kubernetes DaemonSet
When we create a Deployment, we can specify the number of replicas and then Kubernetes creates a ReplicaSet and schedules the number of replicas of the Pod on the nodes in the cluster. A DaemonSet is another controller that manages pods like Deployments, ReplicaSets, and StatefulSets. Let’s get more details about DaemonSets.
Understanding Kubernetes DaemonSet
What is a DaemonSet?
A DaemonSet is a Kubernetes resource that ensures a specified Pod runs on all nodes, or on a specific subset of nodes, in a cluster. As nodes are added to the cluster, Pods are added to them; as nodes are removed from the cluster, those Pods are garbage-collected. Deleting a DaemonSet will clean up the Pods it created.
It’s used for deploying background services across clusters, providing support services for every node — such as system operations services, collecting logs, monitoring frameworks like Prometheus, and storage volumes.
How Do DaemonSets Work?
The DaemonSet controller uses a reconciliation loop: it checks the current state of each node and, if a node is not running the required Pod, creates one there.
The DaemonSet controller reconciliation process reviews both existing nodes and newly created nodes. By default, the Kubernetes scheduler ignores the pods created by the DaemonSet and lets them exist on the node until the node itself is shut down. If a new node is added to the cluster, then the DaemonSet controller notices it is missing a Pod and adds it to the new node.
Common Use Cases of DaemonSet
Some common use cases of DaemonSets are as follows:
- Monitoring Agents: Running a daemon for monitoring frameworks, such as Prometheus.
- Logging Agents: Running a logs collection daemon on every node. For example, tools such as Fluentd or Logstash.
- Node resource monitoring: Running a daemon to monitor resource utilization (CPU usage, memory usage, disk usage, and other resource metrics) on each node in a cluster — for example, the Prometheus Node Exporter.
- Cluster storage: Running a cluster storage daemon on every node in a cluster. For example, glusterd or Ceph.
In a simple case, one DaemonSet, covering all nodes, would be used for each type of daemon. A more complex setup might use multiple DaemonSets for a single type of daemon but with different flags and/or different memory and CPU requests for different hardware types.
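As a sketch of that more complex setup, two DaemonSets could run the same agent with different resource requests, each targeting a node label. The hardware-type label, its value, and the variant label below are hypothetical — nodes would first be labeled accordingly:

```yaml
# Hypothetical sketch: a DaemonSet variant for high-memory nodes only.
# Nodes would be labeled beforehand, e.g.:
#   kubectl label node <node-name> hardware-type=high-mem
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-high-mem
spec:
  selector:
    matchLabels:
      name: fluentd
      variant: high-mem
  template:
    metadata:
      labels:
        name: fluentd
        variant: high-mem
    spec:
      nodeSelector:
        hardware-type: high-mem    # Pods land only on matching nodes
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:latest
        resources:
          requests:
            memory: "512Mi"        # larger request for this hardware class
```

A second DaemonSet with a different nodeSelector value and smaller requests would cover the remaining nodes.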
How Daemon Pods are scheduled
The DaemonSet controller creates a Pod for each eligible node and adds the spec.affinity.nodeAffinity field of the Pod to match the target host. After the Pod is created, the default scheduler typically takes over and then binds the Pod to the target host by setting the .spec.nodeName field. If the new Pod cannot fit on the node, the default scheduler may preempt (evict) some of the existing Pods based on the priority of the new Pod.
The user can specify a different scheduler for the Pods of the DaemonSet, by setting the .spec.template.spec.schedulerName field of the DaemonSet.
The original node affinity specified at the .spec.template.spec.affinity.nodeAffinity field (if specified) is taken into consideration by the DaemonSet controller when evaluating the eligible nodes, but is replaced on the created Pod with the node affinity that matches the name of the eligible node.
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:
    nodeSelectorTerms:
    - matchFields:
      - key: metadata.name
        operator: In
        values:
        - target-host-name
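Scheduling also interacts with taints: the DaemonSet controller automatically adds tolerations for node conditions such as node.kubernetes.io/not-ready and node.kubernetes.io/unschedulable. On clusters whose control-plane nodes carry the usual NoSchedule taint (Minikube’s does not by default), a DaemonSet Pod typically also needs an explicit toleration, as in this sketch:

```yaml
# Added under .spec.template.spec of the DaemonSet so its Pods
# can run on tainted control-plane nodes.
tolerations:
- key: node-role.kubernetes.io/control-plane
  operator: Exists
  effect: NoSchedule
```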
Real-world examples
Now that we understand a DaemonSet and its most common use cases, let’s explore it in a simple example in the Kubernetes cluster.
Minikube Kubernetes cluster
We start by creating a multi-node Kubernetes cluster (one control-plane node and two worker nodes) using Minikube.
$ minikube start --nodes 3 -p daemonset-cluster --driver=docker
....
🏄 Done! kubectl is now configured to use "daemonset-cluster" cluster and "default" namespace by default
$ kubectl get nodes
NAME                    STATUS   ROLES           AGE    VERSION
daemonset-cluster       Ready    control-plane   119s   v1.28.3
daemonset-cluster-m02   Ready    <none>          96s    v1.28.3
daemonset-cluster-m03   Ready    <none>          49s    v1.28.3
Creating a DaemonSet
Here is a simple manifest for a DaemonSet that runs Fluentd logging. This is a good example of using a DaemonSet to collect logs on each node in our cluster.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
spec:
  selector:
    matchLabels:
      name: fluentd
      tier: logging
  template:
    metadata:
      labels:
        name: fluentd
        tier: logging
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:latest
        resources:
          requests:
            cpu: "100m"
            memory: "128Mi"
          limits:
            memory: "256Mi"
As with all other Kubernetes configs, a DaemonSet needs apiVersion, kind, and metadata fields. In this case, the “kind” field is DaemonSet.
Create the DaemonSet using the kubectl command:
$ kubectl apply -f daemonset.yaml
daemonset.apps/fluentd created
Let’s see the created DaemonSet:
$ kubectl get ds -o wide
NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE     CONTAINERS              IMAGES                                         SELECTOR
fluentd   3         3         3       3            3           <none>          5m13s   fluentd-elasticsearch   quay.io/fluentd_elasticsearch/fluentd:latest   name=fluentd,tier=logging

As we can see, Kubernetes has automatically scheduled a Fluentd Pod onto each of the three nodes.
If we specify a .spec.template.spec.nodeSelector, then the DaemonSet controller will create Pods on nodes that match that node selector. Likewise, if we specify a .spec.template.spec.affinity, then the DaemonSet controller will create Pods on nodes that match that node affinity. If you do not specify either, then the DaemonSet controller will create Pods on all nodes.
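For example, to restrict the Fluentd DaemonSet above to a subset of nodes, a nodeSelector could be added under the Pod template. The role=logging label here is hypothetical — nodes would first be labeled with kubectl label node <node-name> role=logging:

```yaml
# Fragment of the DaemonSet spec: Pods are created only on nodes
# carrying the (hypothetical) label role=logging.
spec:
  template:
    spec:
      nodeSelector:
        role: logging
```

Nodes without the label are simply skipped; labeling a node later makes the controller add a Pod to it.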
Get more details about the DaemonSet with the “kubectl describe” command:
$ kubectl describe daemonsets fluentd
Name:           fluentd
Selector:       name=fluentd,tier=logging
Node-Selector:  <none>
Labels:         <none>
Annotations:    deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 3
Current Number of Nodes Scheduled: 3
Number of Nodes Scheduled with Up-to-date Pods: 3
Number of Nodes Scheduled with Available Pods: 3
Number of Nodes Misscheduled: 0
Pods Status:  3 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  name=fluentd
           tier=logging
  Containers:
   fluentd-elasticsearch:
    Image:      quay.io/fluentd_elasticsearch/fluentd:latest
    Port:       <none>
    Host Port:  <none>
    Limits:
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Events:
  Type    Reason            Age   From                  Message
  ----    ------            ----  ----                  -------
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: fluentd-qhc75
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: fluentd-lhc7q
  Normal  SuccessfulCreate  22m   daemonset-controller  Created pod: fluentd-dq658
Updating a DaemonSet
A DaemonSet can be updated by changing one of the following:
- Pod specification
- Resource requests
- Resource limits
- Labels
- Annotations
If node labels are changed, the DaemonSet will promptly add Pods to newly matching nodes and delete Pods from newly not-matching nodes.
We can modify the Pods that a DaemonSet creates; however, Pods do not allow all fields to be updated. Also, the DaemonSet controller will use the original template the next time a node (even one with the same name) is created.
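How those updates are rolled out is controlled by the .spec.updateStrategy field. The default is RollingUpdate; here is a sketch for the Fluentd DaemonSet, assuming we want at most one node’s Pod replaced at a time:

```yaml
# Fragment of the DaemonSet spec controlling how updated Pods roll out.
spec:
  updateStrategy:
    type: RollingUpdate          # default; the alternative is OnDelete
    rollingUpdate:
      maxUnavailable: 1          # replace Pods one node at a time
```

With OnDelete, updated Pods are only created after the old Pods are manually deleted on each node.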
Deleting a DaemonSet
We can delete a DaemonSet using the “kubectl delete” command.
$ kubectl delete daemonset/fluentd
daemonset.apps "fluentd" deleted
$ kubectl get ds
No resources found in default namespace.
$ kubectl get pods
No resources found in default namespace.
We can also remove a DaemonSet while keeping its Pods by specifying --cascade=orphan:
$ kubectl delete daemonset/fluentd --cascade=orphan
The Pods will be left on the nodes. If we subsequently create a new DaemonSet with the same selector, the new DaemonSet adopts the existing Pods. If any Pods need replacing, the DaemonSet replaces them according to its updateStrategy.
Conclusion
In this post, we have learned how to use a DaemonSet in a Kubernetes cluster.
The complete source code of this series is available on GitHub.
Support me through GitHub Sponsors.
Thank you for reading! See you in the next post.