

Practice Test - Service Networking

ㅁ What network range are the nodes in the cluster part of?

kubectl get nodes -o wide
ip addr 

 

[172.17.0.0/16]
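As a sanity check, no cluster needed: the node addresses that appear in the weave logs further down, 172.17.0.37 and 172.17.0.38, both carry the 172.17 prefix, so they sit inside 172.17.0.0/16. Because a /16 mask aligns on an octet boundary, a plain string prefix match is enough (this is just an illustration, not how you would validate arbitrary CIDRs):

```shell
# /16 aligns on an octet boundary, so a prefix match is sufficient here.
# On a live cluster, read the IPs from `kubectl get nodes -o wide` (INTERNAL-IP column).
for ip in 172.17.0.37 172.17.0.38; do
  case "$ip" in
    172.17.*) echo "$ip is in 172.17.0.0/16" ;;
    *)        echo "$ip is NOT in 172.17.0.0/16" ;;
  esac
done
```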

 

 

ㅁ What is the range of IP addresses configured for PODs on this cluster?

 

kubectl logs <weave-pod-name> weave -n kube-system

kubectl get pods -n kube-system 

NAME                                   READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-8fl54                1/1     Running   0          8m12s
coredns-f9fd979d6-ftp82                1/1     Running   2          8m29s
etcd-controlplane                      1/1     Running   0          9m4s
kube-apiserver-controlplane            1/1     Running   0          9m4s
kube-controller-manager-controlplane   1/1     Running   0          9m4s
kube-proxy-bw2zx                       1/1     Running   2          8m42s
kube-proxy-fnlct                       1/1     Running   0          8m55s
kube-scheduler-controlplane            1/1     Running   0          9m3s
weave-net-2smd4                        2/2     Running   1          8m30s
weave-net-99md7                        2/2     Running   3          8m30s

 

kubectl logs weave-net-2smd4 -c weave -n kube-system
DEBU: 2021/03/31 05:34:14.203030 [kube-peers] Checking peer "2a:77:e6:0a:61:29" against list &{[{de:53:1f:08:f3:7b node01}]}
Peer not in list; removing persisted data
INFO: 2021/03/31 05:34:14.325451 Command line options: map[conn-limit:200 datapath:datapath db-prefix:/weavedb/weave-net docker-api: expect-npc:true http-addr:127.0.0.1:6784 ipalloc-init:consensus=1 ipalloc-range:10.32.0.0/12 metrics-addr:0.0.0.0:6782 name:2a:77:e6:0a:61:29 nickname:controlplane no-dns:true no-masq-local:true port:6783]
INFO: 2021/03/31 05:34:14.325504 weave  2.8.1
INFO: 2021/03/31 05:34:14.941231 Bridge type is bridged_fastdp
INFO: 2021/03/31 05:34:14.941260 Communication between peers is unencrypted.
INFO: 2021/03/31 05:34:15.031360 Our name is 2a:77:e6:0a:61:29(controlplane)
INFO: 2021/03/31 05:34:15.031421 Launch detected - using supplied peer list: [172.17.0.38]
INFO: 2021/03/31 05:34:15.031625 Using "no-masq-local" LocalRangeTracker
INFO: 2021/03/31 05:34:15.031641 Checking for pre-existing addresses on weave bridge
INFO: 2021/03/31 05:34:15.034219 [allocator 2a:77:e6:0a:61:29] No valid persisted data
INFO: 2021/03/31 05:34:15.041664 Error checking version: Get "https://checkpoint-api.weave.works/v1/check/weave-net?arch=amd64&flag_docker-version=none&flag_kernel-version=4.15.0-122-generic&os=linux&signature=&version=2.8.1": dial tcp: lookup checkpoint-api.weave.works on 10.96.0.10:53: write udp 172.17.0.37:43063->10.96.0.10:53: write: operation not permitted
INFO: 2021/03/31 05:34:15.046232 [allocator 2a:77:e6:0a:61:29] Initialising via deferred consensus
INFO: 2021/03/31 05:34:15.047730 Sniffing traffic on datapath (via ODP)
INFO: 2021/03/31 05:34:15.048574 ->[172.17.0.38:6783] attempting connection
INFO: 2021/03/31 05:34:15.050262 Listening for HTTP control messages on 127.0.0.1:6784
INFO: 2021/03/31 05:34:15.050284 Listening for metrics requests on 0.0.0.0:6782
INFO: 2021/03/31 05:34:15.061389 ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: connection ready; using protocol version 2
INFO: 2021/03/31 05:34:15.061509 overlay_switch ->[de:53:1f:08:f3:7b(node01)] using fastdp
INFO: 2021/03/31 05:34:15.061551 ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: connection added (new peer)
INFO: 2021/03/31 05:34:15.162913 ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: connection fully established
INFO: 2021/03/31 05:34:15.463432 sleeve ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: Effective MTU verified at 1438
INFO: 2021/03/31 05:34:15.491963 [kube-peers] Added myself to peer list &{[{de:53:1f:08:f3:7b node01} {2a:77:e6:0a:61:29 controlplane}]}
DEBU: 2021/03/31 05:34:15.500306 [kube-peers] Nodes that have disappeared: map[]
INFO: 2021/03/31 05:34:15.535273 adding entry 10.44.0.0/14 to weaver-no-masq-local of 0
INFO: 2021/03/31 05:34:15.535323 added entry 10.44.0.0/14 to weaver-no-masq-local of 0
DEBU: 2021/03/31 05:34:15.666308 registering for updates for node delete events
INFO: 2021/03/31 05:34:16.350198 Discovered remote MAC ae:d8:69:c1:48:0c at de:53:1f:08:f3:7b(node01)
INFO: 2021/03/31 05:34:28.236289 Discovered remote MAC 26:b2:1f:68:c1:72 at de:53:1f:08:f3:7b(node01)
INFO: 2021/03/31 05:34:32.214083 ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: connection shutting down due to error: read tcp 172.17.0.37:59703->172.17.0.38:6783: read: connection reset by peer
INFO: 2021/03/31 05:34:32.214359 ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: connection deleted
INFO: 2021/03/31 05:34:32.214613 Removed unreachable peer de:53:1f:08:f3:7b(node01)
INFO: 2021/03/31 05:34:33.312280 ->[172.17.0.38:6783] attempting connection
INFO: 2021/03/31 05:34:33.313028 ->[172.17.0.38:6783] error during connection attempt: dial tcp :0->172.17.0.38:6783: connect: connection refused
INFO: 2021/03/31 05:34:37.094830 ->[172.17.0.38:6783] attempting connection
INFO: 2021/03/31 05:34:37.095796 ->[172.17.0.38:6783] error during connection attempt: dial tcp :0->172.17.0.38:6783: connect: connection refused
INFO: 2021/03/31 05:34:41.963978 ->[172.17.0.38:6783] attempting connection
INFO: 2021/03/31 05:34:41.964813 ->[172.17.0.38:6783] error during connection attempt: dial tcp :0->172.17.0.38:6783: connect: connection refused
INFO: 2021/03/31 05:34:50.232261 ->[172.17.0.38:6783] attempting connection
INFO: 2021/03/31 05:34:50.232982 ->[172.17.0.38:6783] error during connection attempt: dial tcp :0->172.17.0.38:6783: connect: connection refused
INFO: 2021/03/31 05:34:54.013142 ->[172.17.0.38:54313] connection accepted
INFO: 2021/03/31 05:34:54.014580 ->[172.17.0.38:54313|de:53:1f:08:f3:7b(node01)]: connection ready; using protocol version 2
INFO: 2021/03/31 05:34:54.014696 overlay_switch ->[de:53:1f:08:f3:7b(node01)] using fastdp
INFO: 2021/03/31 05:34:54.014747 ->[172.17.0.38:54313|de:53:1f:08:f3:7b(node01)]: connection added (new peer)
INFO: 2021/03/31 05:34:54.125625 overlay_switch ->[de:53:1f:08:f3:7b(node01)] using sleeve
INFO: 2021/03/31 05:34:54.125775 ->[172.17.0.38:54313|de:53:1f:08:f3:7b(node01)]: connection fully established
INFO: 2021/03/31 05:34:54.126820 sleeve ->[172.17.0.38:6783|de:53:1f:08:f3:7b(node01)]: Effective MTU verified at 1438
INFO: 2021/03/31 05:34:54.516230 overlay_switch ->[de:53:1f:08:f3:7b(node01)] using fastdp
INFO: 2021/03/31 05:34:56.030257 Discovered remote MAC de:53:1f:08:f3:7b at de:53:1f:08:f3:7b(node01)
INFO: 2021/03/31 05:35:02.974135 Discovered remote MAC 16:84:fa:ec:90:c2 at de:53:1f:08:f3:7b(node01)
INFO: 2021/03/31 05:35:12.158131 Discovered remote MAC ae:d8:69:c1:48:0c at de:53:1f:08:f3:7b(node01)

[10.32.0.0/12]

Check the ipalloc-range entry in the log output above.
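The relevant value can be pulled out with grep. Below, the captured log line stands in for the live command; on the cluster, pipe `kubectl logs weave-net-2smd4 -c weave -n kube-system` into the same grep instead of the echo:

```shell
# Demo on the captured log line; replace the echo with the kubectl logs
# command on a live cluster.
echo 'Command line options: map[conn-limit:200 ipalloc-range:10.32.0.0/12 port:6783]' \
  | grep -o 'ipalloc-range:[0-9./]*'
```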

 

ㅁ What is the IP Range configured for the services within the cluster?

[10.96.0.0/12]

$ cat /etc/kubernetes/manifests/kube-apiserver.yaml | grep cluster

- --service-cluster-ip-range=10.96.0.0/12
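A quick arithmetic check (pure shell, only for illustration): /12 corresponds to netmask 255.240.0.0, so 10.96.0.0/12 spans second octets 96 through 111, and the cluster DNS address 10.96.0.10 seen in the logs above falls inside it:

```shell
# /12 -> mask 255.240.0.0 -> 10.96.0.0/12 covers 10.96.0.0 - 10.111.255.255
ip=10.96.0.10
second=$(echo "$ip" | cut -d. -f2)
if [ "$second" -ge 96 ] && [ "$second" -le 111 ]; then
  echo "$ip is within 10.96.0.0/12"
else
  echo "$ip is outside 10.96.0.0/12"
fi
```

Note that the service range (10.96.0.0/12) and the pod range (10.32.0.0/12) do not overlap, which is required for routing to work.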

 

ㅁ How many kube-proxy pods are deployed in this cluster?

$ kubectl get pods -n kube-system | grep kube-proxy 
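Adding a count makes the answer explicit. The grep below is demoed against a few lines of the pod listing captured earlier; on a live cluster, pipe `kubectl get pods -n kube-system --no-headers` into it instead of the here-document:

```shell
# Count pods whose name starts with "kube-proxy-".
grep -c '^kube-proxy-' <<'EOF'
coredns-f9fd979d6-8fl54                1/1     Running   0          8m12s
kube-proxy-bw2zx                       1/1     Running   2          8m42s
kube-proxy-fnlct                       1/1     Running   0          8m55s
weave-net-2smd4                        2/2     Running   1          8m30s
EOF
```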

 

ㅁ What type of proxy is the kube-proxy configured to use?

 

[iptables]

$ kubectl logs kube-proxy-bw2zx -n kube-system
I0331 05:34:48.766057       1 node.go:136] Successfully retrieved node IP: 172.17.0.38
I0331 05:34:48.767483       1 server_others.go:111] kube-proxy node IP is an IPv4 address (172.17.0.38), assume IPv4 operation
W0331 05:34:48.826573       1 server_others.go:579] Unknown proxy mode "", assuming iptables proxy
I0331 05:34:48.826835       1 server_others.go:186] Using iptables Proxier.
I0331 05:34:48.827267       1 server.go:650] Version: v1.19.0
I0331 05:34:48.827816       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0331 05:34:48.829131       1 config.go:315] Starting service config controller
I0331 05:34:48.829181       1 shared_informer.go:240] Waiting for caches to sync for service config
I0331 05:34:48.829221       1 config.go:224] Starting endpoint slice config controller
I0331 05:34:48.829266       1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0331 05:34:48.929506       1 shared_informer.go:247] Caches are synced for endpoint slice config 
I0331 05:34:48.929510       1 shared_informer.go:247] Caches are synced for service config 
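The deciding line is `Using iptables Proxier.` A grep can isolate it; the echo below stands in for the live `kubectl logs kube-proxy-bw2zx -n kube-system`:

```shell
# "Using <mode> Proxier." names the active proxy mode (iptables here;
# ipvs is the other common mode).
echo 'I0331 05:34:48.826835       1 server_others.go:186] Using iptables Proxier.' \
  | grep -oE 'Using [a-z]+ Proxier'
```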

 

ㅁ How does this Kubernetes cluster ensure that a kube-proxy pod runs on all nodes in the cluster?

 

Inspect the kube-proxy pods and try to identify how they are deployed.

 

[kubelet ensures this is done]

[Using deployments]

[Using a custom script]

[Using a DaemonSet] (answer)

 

$ kubectl get daemonset -n kube-system
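Besides listing DaemonSets, ownership can be read off any kube-proxy pod: a DaemonSet-managed pod carries an ownerReference of kind DaemonSet. The jsonpath query in the comment assumes the pod name from the earlier listing; the echo demoes the idea on a hand-written JSON fragment:

```shell
# On a live cluster (pod name varies per node):
#   kubectl get pod kube-proxy-bw2zx -n kube-system \
#     -o jsonpath='{.metadata.ownerReferences[0].kind}'
# Demo on a hand-written ownerReferences fragment:
echo '"ownerReferences": [{"kind": "DaemonSet", "name": "kube-proxy"}]' \
  | grep -o '"kind": "[A-Za-z]*"'
```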
