[ kubernetes  linux  opensource  ]

Rolling updates and update strategy in Kubernetes daemonsets

A DaemonSet ensures that all (or some) nodes run a copy of a pod. It is typically used for node-level storage or monitoring daemons such as glusterd and Prometheus exporters. In this post we are going to create a DaemonSet and perform an image update. We will also try the different update strategies and watch how DaemonSet updates behave under each.

Setup

I am using VirtualBox (running on an Ubuntu 18.04 physical machine) for this entire setup. The physical machine is a Dell Inspiron laptop with 12GB RAM, an Intel® Core™ i7-6500U CPU @ 2.50GHz × 4, and a 512GB SSD.

Step 1: Create a daemon set

vikki@kubernetes1:~$ kubectl create -f ds.yaml 
daemonset.apps/fluentd-elasticsearch created
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: default
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
vikki@kubernetes1:~$ kubectl get ds
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   2         2         2       2            2           <none>          5s
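When scripting around DaemonSets, it helps to gate on the rollout being complete. On a live cluster `kubectl rollout status ds/fluentd-elasticsearch` does this for you; as an offline sketch, the DESIRED and READY columns of the `kubectl get ds` table can be compared with awk (the sample line below is the output captured above):

```shell
# Compare DESIRED (column 2) and READY (column 4) from a saved
# `kubectl get ds` line; the sample line is taken from the output above.
line="fluentd-elasticsearch   2   2   2   2   2   <none>   5s"
desired=$(echo "$line" | awk '{print $2}')
ready=$(echo "$line" | awk '{print $4}')
if [ "$desired" = "$ready" ]; then
  echo "daemonset ready"   # prints: daemonset ready
fi
```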

Step 2: Verify the image version and updateStrategy of the daemonset

vikki@kubernetes1:~$ kubectl describe ds fluentd-elasticsearch |grep Image
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2

vikki@kubernetes1:~$ kubectl get ds fluentd-elasticsearch -o yaml |grep -A 3 -i updateStrategy
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate

vikki@kubernetes1:~$ kubectl get pod fluentd-elasticsearch-
fluentd-elasticsearch-7ds49 fluentd-elasticsearch-qh8n8  
vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-7ds49 |grep Image:
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
    

The DaemonSet is running image version "v2.5.2" and its updateStrategy is "RollingUpdate".
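Grepping works, but on a live cluster JSONPath pulls the strategy type directly: `kubectl get ds fluentd-elasticsearch -o jsonpath='{.spec.updateStrategy.type}'`. Offline, the same field can be dug out of a saved `-o yaml` dump with awk — a small sketch (the helper name and temp file are made up for this example):

```shell
# strategy_type: print updateStrategy.type from a saved
# `kubectl get ds ... -o yaml` dump (helper name is hypothetical).
strategy_type() {
  awk '/updateStrategy:/{f=1} f && /type:/{print $2; exit}' "$1"
}

# Trimmed sample dump matching the output shown above.
cat > /tmp/ds-dump.yaml <<'EOF'
spec:
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: RollingUpdate
EOF

strategy_type /tmp/ds-dump.yaml   # prints: RollingUpdate
```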

Step 3: Update the DaemonSet container image to a different version, say "v2.5.1"

vikki@kubernetes1:~$ kubectl set image ds fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.5.1
daemonset.apps/fluentd-elasticsearch image updated

Step 4: Verify the image version is updated at the DaemonSet level and also at the pod level

vikki@kubernetes1:~$ kubectl describe ds fluentd-elasticsearch |grep Image
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.1

vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-
fluentd-elasticsearch-qh8n8 fluentd-elasticsearch-x2f57  
vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-qh8n8 |grep Image:
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.1

Now we can see that as soon as the DaemonSet image is updated, the pod image also gets updated. This is because the updateStrategy is set to RollingUpdate.
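The pace of that rolling update is throttled by `maxUnavailable`, which can be an absolute number (here, 1) or a percentage of the desired pods; per the DaemonSet API reference, percentages are converted to a pod count by rounding up. A small sketch of that conversion (the numbers are illustrative, not from this cluster):

```shell
# Convert a maxUnavailable value (absolute or percentage) into a pod count.
# For DaemonSets the API docs state percentages are rounded up.
desired=5
max_unavailable="20%"
case "$max_unavailable" in
  *%) pct=${max_unavailable%\%}
      allowed=$(( (desired * pct + 99) / 100 ));;   # ceiling division
  *)  allowed=$max_unavailable;;
esac
echo "$allowed"   # prints: 1
```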

Step 5: Now let's change the updateStrategy to OnDelete and watch the behaviour

vikki@kubernetes1:~$ kubectl get ds fluentd-elasticsearch -o yaml > ds_new.yaml 
vikki@kubernetes1:~$ vim ds_new.yaml 
apiVersion: apps/v1
kind: DaemonSet
metadata:
  annotations:
    deprecated.daemonset.template.generation: "3"
  creationTimestamp: "2019-11-23T07:30:57Z"
  generation: 4
  labels:
    k8s-app: fluentd-logging
  name: fluentd-elasticsearch
  namespace: default
  resourceVersion: "26767"
  selfLink: /apis/apps/v1/namespaces/default/daemonsets/fluentd-elasticsearch
  uid: 36223bfd-51d5-4103-8509-eca3c09b8e85
spec:
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: fluentd-elasticsearch
    spec:
      containers:
      - image: quay.io/fluentd_elasticsearch/fluentd:v2.5.1
        imagePullPolicy: IfNotPresent
        name: fluentd-elasticsearch
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /var/log
          name: varlog
        - mountPath: /var/lib/docker/containers
          name: varlibdockercontainers
          readOnly: true
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - hostPath:
          path: /var/log
          type: ""
        name: varlog
      - hostPath:
          path: /var/lib/docker/containers
          type: ""
        name: varlibdockercontainers
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: OnDelete
status:
  currentNumberScheduled: 2
  desiredNumberScheduled: 2
  numberMisscheduled: 0
  numberReady: 0
  numberUnavailable: 2
  observedGeneration: 4
  updatedNumberScheduled: 2
vikki@kubernetes1:~$ kubectl apply -f ds_new.yaml 
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
daemonset.apps/fluentd-elasticsearch configured
vikki@kubernetes1:~$ kubectl describe ds fluentd-elasticsearch |grep Image
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.1

vikki@kubernetes1:~$ kubectl get ds fluentd-elasticsearch -o yaml |grep -A 3 -i updateStrategy
  updateStrategy:
    rollingUpdate:
      maxUnavailable: 1
    type: OnDelete

vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-qh8n8 |grep Image:
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2
    

The update strategy is changed to OnDelete, and the pod reports image version v2.5.2.

Step 6: Let's change the image to a different version (v2.5.0)

vikki@kubernetes1:~$ kubectl set image ds fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:v2.5.0
daemonset.apps/fluentd-elasticsearch image updated

Step 7: Verify the image version in daemonset and pod

vikki@kubernetes1:~$ kubectl describe ds fluentd-elasticsearch |grep Image
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.0

vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-qh8n8 |grep Image:
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.2

Now we can see that the image version on the DaemonSet is updated, but the pod is still running the older version.
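Under OnDelete, the DaemonSet spec and the running pods can stay divergent indefinitely, so it is worth checking for drift explicitly. On a live cluster the two images can be read with `kubectl get ds fluentd-elasticsearch -o jsonpath='{.spec.template.spec.containers[0].image}'` and the corresponding pod JSONPath; the comparison itself is plain string equality (the values below are the ones captured above):

```shell
# Detect drift between the DaemonSet spec image and a running pod's image.
# These two values are the ones observed in this post with OnDelete set.
ds_image="quay.io/fluentd_elasticsearch/fluentd:v2.5.0"
pod_image="quay.io/fluentd_elasticsearch/fluentd:v2.5.2"
if [ "$ds_image" != "$pod_image" ]; then
  echo "pod is behind: $pod_image -> $ds_image"
fi
```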

Step 8: Let's delete the pods, wait for the new pods to be created, and verify the image version in a new pod

vikki@kubernetes1:~$ kubectl delete pod fluentd-elasticsearch-
fluentd-elasticsearch-qh8n8 fluentd-elasticsearch-x2f57  

vikki@kubernetes1:~$ kubectl delete pod fluentd-elasticsearch-qh8n8 fluentd-elasticsearch-x2f57
pod "fluentd-elasticsearch-qh8n8" deleted
pod "fluentd-elasticsearch-x2f57" deleted
vikki@kubernetes1:~$ kubectl get pod fluentd-elasticsearch-
fluentd-elasticsearch-89g44 fluentd-elasticsearch-mf5f9  


vikki@kubernetes1:~$ kubectl describe pod fluentd-elasticsearch-89g44 |grep Image:
    Image: quay.io/fluentd_elasticsearch/fluentd:v2.5.0

Now we can see that with OnDelete, the new image version reaches a pod only after that pod is deleted and recreated.
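This is the point of OnDelete: you control the rollout pace by deleting pods yourself, for example one node at a time, verifying each replacement before moving on. A sketch of such a loop follows; the kubectl calls are left as comments so the loop runs anywhere, and `kubectl wait --for=condition=Ready` is the real command you would use between deletions:

```shell
# Manual one-at-a-time rollout under OnDelete: delete each pod, then wait
# for its replacement to become Ready before touching the next one.
pods="fluentd-elasticsearch-qh8n8 fluentd-elasticsearch-x2f57"
for p in $pods; do
  # kubectl delete pod "$p"
  # kubectl wait --for=condition=Ready pod -l name=fluentd-elasticsearch --timeout=120s
  echo "replaced $p"
done
```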
