
CKA & CKAD / Cluster Maintenance

Operating System Upgrade

 

 

[Practice Test]

 

ㅁ Let us explore the environment first. How many nodes do you see in the cluster? (Including the master/controlplane and worker nodes)

kubectl get nodes

ㅁ How many applications do you see hosted on the cluster? (Check the number of deployments)

kubectl get deployment
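
If you only need the counts, appending --no-headers and piping to wc is a handy shortcut (my own habit, not required by the test):

kubectl get nodes --no-headers | wc -l
kubectl get deployments --no-headers | wc -l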

ㅁ On which nodes are the applications hosted?

kubectl get pods -o wide

NAME                    READY   STATUS    RESTARTS   AGE    IP           NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-bpfql   1/1     Running   0          119s   10.244.1.5   node03   <none>           <none>
blue-746c87566d-hczfq   1/1     Running   0          119s   10.244.2.3   node02   <none>           <none>
blue-746c87566d-lz9lg   1/1     Running   0          119s   10.244.3.4   node01   <none>           <none>
red-75f847bf79-nqntb    1/1     Running   0          119s   10.244.2.2   node02   <none>           <none>
red-75f847bf79-wlqz9    1/1     Running   0          119s   10.244.3.3   node01   <none>           <none>
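
The NODE column above already answers this; a custom-columns view that prints just the pod-to-node mapping is an optional alternative (not part of the lab):

kubectl get pods -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName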

 

ㅁ We need to take node01 out for maintenance. Empty the node of all applications and mark it unschedulable

kubectl drain node01 --ignore-daemonsets

node/node01 cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-gvrwj, kube-system/kube-proxy-2lgrj
evicting pod default/blue-746c87566d-lz9lg
evicting pod default/red-75f847bf79-wlqz9
evicting pod kube-system/coredns-f9fd979d6-gwrqk
pod/red-75f847bf79-wlqz9 evicted
pod/blue-746c87566d-lz9lg evicted
pod/coredns-f9fd979d6-gwrqk evicted
node/node01 evicted
controlplane $ kubectl get nodes

NAME           STATUS                     ROLES    AGE   VERSION
controlplane   Ready                      master   12m   v1.19.0
node01         Ready,SchedulingDisabled   <none>   12m   v1.19.0
node02         Ready                      <none>   12m   v1.19.0
node03         Ready                      <none>   12m   v1.19.0
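
drain is effectively cordon plus eviction, which is why node01 now shows SchedulingDisabled. If you want to double-check the flag on the node object itself (an extra check of my own, not graded):

kubectl get node node01 -o jsonpath='{.spec.unschedulable}'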

 

ㅁ What nodes are the apps on now?

 [master, node01]

 [master, node02]

 [node02, node03]

 [node01, node03]

controlplane $ kubectl get pod -o wide

NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-bpfql   1/1     Running   0          9m23s   10.244.1.5   node03   <none>           <none>
blue-746c87566d-hczfq   1/1     Running   0          9m23s   10.244.2.3   node02   <none>           <none>
blue-746c87566d-plbg4   1/1     Running   0          5m20s   10.244.1.6   node03   <none>           <none>
red-75f847bf79-nqntb    1/1     Running   0          9m23s   10.244.2.2   node02   <none>           <none>
red-75f847bf79-vg8md    1/1     Running   0          5m20s   10.244.1.7   node03   <none>           <none>

 

ㅁ The maintenance tasks have been completed. Configure the node to be schedulable again

kubectl uncordon node01

node/node01 uncordoned
controlplane $ kubectl get nodes

NAME           STATUS   ROLES    AGE   VERSION
controlplane   Ready    master   17m   v1.19.0
node01         Ready    <none>   17m   v1.19.0
node02         Ready    <none>   17m   v1.19.0
node03         Ready    <none>   17m   v1.19.0

ㅁ How many pods are scheduled on node01 now?

[3], [1], [0], [2]

kubectl get pods -o wide

ㅁ Why are there no pods on node01?

[node01 is faulty]

[Only when new pods are created will they be scheduled on node01]

[node01 did not upgrade successfully]

[node01 is cordoned]
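
Uncordoning does not move existing pods back; only newly created pods can land on node01 again. As a hypothetical illustration (scaling is not part of this test, so skip it if scoring matters), a replica created after the uncordon may well be scheduled there:

kubectl scale deployment blue --replicas=4
kubectl get pods -o wide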

 

ㅁ Why are there no pods placed on the master node? (Check the master/controlplane node details)

[master/controlplane node is cordoned]

[you can never have pods on master nodes]

[master/controlplane node is faulty]

[master/controlplane node has taints set on it]
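
To confirm, list the taints on the controlplane node; the NoSchedule taint is what keeps regular workloads off it:

kubectl describe node controlplane | grep -i taint
kubectl get node controlplane -o jsonpath='{.spec.taints}'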

 

ㅁ It is now time to take down node02 for maintenance. Before you remove all workloads from node02, answer the following question.

Can you drain node02 using the same command as node01? Try it.

[Yes], [No, you must force it]

kubectl drain node02 --ignore-daemonsets

node/node02 cordoned
error: unable to drain node "node02", aborting command...

There are pending nodes to be drained:
 node02
error: cannot delete Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet (use --force to override): default/hr-app

ㅁ Why do you need to force the drain?

[node02 has a pod not part of a replicaset]

[node02 has taints on it]

[Only one node can be taken down in a day]

[Node02 is faulty]

 

ㅁ What is the name of the POD hosted on node02 that is not part of a replicaset?

[hr-app], [red], [blue], [redis]

 

kubectl get replicaset

NAME              DESIRED   CURRENT   READY   AGE
blue-746c87566d   3         3         3       25m
red-75f847bf79    2         2         2       25m
kubectl get pods -o wide

NAME                    READY   STATUS    RESTARTS   AGE     IP           NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-bpfql   1/1     Running   0          25m     10.244.1.5   node03   <none>           <none>
blue-746c87566d-hczfq   1/1     Running   0          25m     10.244.2.3   node02   <none>           <none>
blue-746c87566d-plbg4   1/1     Running   0          21m     10.244.1.6   node03   <none>           <none>
hr-app                  1/1     Running   0          4m17s   10.244.2.5   node02   <none>           <none>
red-75f847bf79-nqntb    1/1     Running   0          25m     10.244.2.2   node02   <none>           <none>
red-75f847bf79-vg8md    1/1     Running   0          21m     10.244.1.7   node03   <none>           <none>
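
Another way to spot the unmanaged pod is to print each pod's owner kind; hr-app has no ownerReferences, while the blue and red pods are owned by a ReplicaSet (optional check, not required by the test):

kubectl get pods -o custom-columns=NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].kind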

 

ㅁ What would happen to hr-app if node02 is drained forcefully?

 

[hr-app will be re-created on other nodes]

[hr-app will be lost forever]

[hr-app will be re-created on master]

[hr-app will continue to run as a Docker container]

 

kubectl drain node02 --force --ignore-daemonsets

node/node02 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/hr-app; ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-f56gq, kube-system/kube-proxy-p642d
evicting pod default/hr-app
evicting pod default/blue-746c87566d-hczfq
evicting pod default/red-75f847bf79-nqntb
evicting pod kube-system/coredns-f9fd979d6-xtml4
pod/hr-app evicted
pod/blue-746c87566d-hczfq evicted
pod/coredns-f9fd979d6-xtml4 evicted
pod/red-75f847bf79-nqntb evicted
node/node02 evicted
kubectl get pods

NAME                    READY   STATUS    RESTARTS   AGE
blue-746c87566d-bpfql   1/1     Running   0          30m
blue-746c87566d-plbg4   1/1     Running   0          26m
blue-746c87566d-vcwmc   1/1     Running   0          18s
red-75f847bf79-4t8kq    1/1     Running   0          18s
red-75f847bf79-vg8md    1/1     Running   0          26m
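
hr-app is gone for good since no controller was managing it; querying it now would just return a NotFound error (quick sanity check):

kubectl get pod hr-app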

ㅁ Drain node02 and mark it unschedulable

The forced drain above already emptied node02 and marked it unschedulable, so only a verification is needed:

kubectl get nodes

 

ㅁ Node03 now hosts our critical applications. We do not want to schedule any more apps on node03. Mark node03 as unschedulable but do not remove any apps currently running on it.

 

kubectl cordon node03

node/node03 cordoned
kubectl get nodes

NAME           STATUS                     ROLES    AGE   VERSION
controlplane   Ready                      master   46m   v1.19.0
node01         Ready                      <none>   46m   v1.19.0
node02         Ready,SchedulingDisabled   <none>   46m   v1.19.0
node03         Ready,SchedulingDisabled   <none>   46m   v1.19.0
kubectl get pods -o wide

NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE     NOMINATED NODE   READINESS GATES
blue-746c87566d-bpfql   1/1     Running   0          41m   10.244.1.5   node03   <none>           <none>
blue-746c87566d-plbg4   1/1     Running   0          37m   10.244.1.6   node03   <none>           <none>
blue-746c87566d-vcwmc   1/1     Running   0          11m   10.244.3.5   node01   <none>           <none>
red-75f847bf79-4t8kq    1/1     Running   0          11m   10.244.3.6   node01   <none>           <none>
red-75f847bf79-vg8md    1/1     Running   0          37m   10.244.1.7   node03   <none>           <none>
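
cordon only flips the unschedulable flag and leaves existing pods alone, which is why node03 still runs its pods above. To see that flag across all nodes at once (optional):

kubectl get nodes -o custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable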

 
