ㅁ How many Nodes are part of this cluster?
(Including master and worker nodes)
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
controlplane Ready master 3m33s v1.19.0
node01 Ready <none> 3m3s v1.19.0
node02 Ready <none> 3m4s v1.19.0
node03 Ready <none> 3m3s v1.19.0
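There are 4 nodes in total: one controlplane and three workers. A quick way to count them:
$ kubectl get nodes --no-headers | wc -l
4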
ㅁ What is the Networking Solution used by the cluster?
$ ls /etc/cni/net.d
10-weave.conflist
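The presence of 10-weave.conflist means the cluster uses Weave Net as its networking solution. To double-check which plugin the conflist configures (it should reference the weave-net plugin type), inspect the file:
$ cat /etc/cni/net.d/10-weave.conflist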
ㅁ How many weave agents/peers are deployed in this cluster?
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-f9fd979d6-mxmv5 1/1 Running 0 4m54s
coredns-f9fd979d6-pskwm 1/1 Running 1 4m37s
etcd-controlplane 1/1 Running 0 5m37s
kube-apiserver-controlplane 1/1 Running 0 5m37s
kube-controller-manager-controlplane 1/1 Running 0 5m37s
kube-proxy-gw8h8 1/1 Running 2 5m18s
kube-proxy-kqj8k 1/1 Running 2 5m19s
kube-proxy-s7gb5 1/1 Running 0 5m29s
kube-proxy-t8cp5 1/1 Running 2 5m18s
kube-scheduler-controlplane 1/1 Running 0 5m37s
weave-net-7qlqp 2/2 Running 3 4m56s
weave-net-bqrrl 2/2 Running 3 4m56s
weave-net-l6z8l 2/2 Running 3 4m55s
weave-net-sr4zh 2/2 Running 0 4m56s
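Four weave-net pods are running, i.e. one weave peer per node. A quick way to count them:
$ kubectl get pods -n kube-system | grep -c weave-net
4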
ㅁ On which nodes are the weave peers present?
kubectl get pods -n kube-system -o wide | grep weave
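The NODE column of the -o wide output shows where each peer runs (one per node). Assuming the weave-net DaemonSet pods carry the usual name=weave-net label, a label selector works as well:
kubectl get pods -n kube-system -l name=weave-net -o wide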
ㅁ Identify the name of the bridge network/interface created by weave on each node
ip link show
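Weave creates a bridge named weave on every node; it appears as interface 9 in the ip address output below. To list only bridge devices:
ip link show type bridge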
ㅁ What is the POD IP address range configured by weave?
ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 02:42:ac:11:00:0c brd ff:ff:ff:ff:ff:ff
inet 172.17.0.12/16 brd 172.17.255.255 scope global ens3
valid_lft forever preferred_lft forever
inet6 fe80::42:acff:fe11:c/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:b3:ac:63:d3 brd ff:ff:ff:ff:ff:ff
inet 172.18.0.1/24 brd 172.18.0.255 scope global docker0
valid_lft forever preferred_lft forever
7: datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UNKNOWN group default qlen 1000
link/ether d6:93:c7:7e:ec:65 brd ff:ff:ff:ff:ff:ff
inet6 fe80::d493:c7ff:fe7e:ec65/64 scope link
valid_lft forever preferred_lft forever
9: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
link/ether 2a:77:e6:0a:61:29 brd ff:ff:ff:ff:ff:ff
inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
valid_lft forever preferred_lft forever
inet6 fe80::2877:e6ff:fe0a:6129/64 scope link
valid_lft forever preferred_lft forever
11: vethwe-datapath@vethwe-bridge: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master datapath state UP group default
link/ether 6a:b6:0d:9c:99:b2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::68b6:dff:fe9c:99b2/64 scope link
valid_lft forever preferred_lft forever
12: vethwe-bridge@vethwe-datapath: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue master weave state UP group default
link/ether 9a:3d:c4:97:12:c2 brd ff:ff:ff:ff:ff:ff
inet6 fe80::983d:c4ff:fe97:12c2/64 scope link
valid_lft forever preferred_lft forever
13: vxlan-6784: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65535 qdisc noqueue master datapath state UNKNOWN group default qlen 1000
link/ether 8e:0e:c8:32:3b:b9 brd ff:ff:ff:ff:ff:ff
inet6 fe80::8c0e:c8ff:fe32:3bb9/64 scope link
valid_lft forever preferred_lft forever
ip address show weave
9: weave: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1376 qdisc noqueue state UP group default qlen 1000
link/ether 2a:77:e6:0a:61:29 brd ff:ff:ff:ff:ff:ff
inet 10.32.0.1/12 brd 10.47.255.255 scope global weave
valid_lft forever preferred_lft forever
inet6 fe80::2877:e6ff:fe0a:6129/64 scope link
valid_lft forever preferred_lft forever
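The weave bridge holds 10.32.0.1/12, so the pod IP range configured by weave is 10.32.0.0/12. The same information can be queried from weave's IPAM inside one of the weave-net pods (the script path below is an assumption based on the standard weave-net image):
kubectl exec -n kube-system weave-net-7qlqp -c weave -- /home/weave/weave --local status ipam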
ㅁ What is the default gateway configured on the PODs scheduled on node03?
Default gateway on the controlplane:
ip route
default via 172.17.0.1 dev ens3
10.32.0.0/12 dev weave proto kernel scope link src 10.32.0.1
10.244.1.0/24 via 172.17.0.40 dev ens3
10.244.2.0/24 via 172.17.0.42 dev ens3
10.244.3.0/24 via 172.17.0.21 dev ens3
172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.12
172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown
ip route on node03:
ssh node03 ip route
default via 172.17.0.1 dev ens3
10.32.0.0/12 dev weave proto kernel scope link src 10.40.0.0
10.244.0.0/24 via 172.17.0.12 dev ens3
10.244.1.0/24 via 172.17.0.40 dev ens3
10.244.3.0/24 via 172.17.0.21 dev ens3
172.17.0.0/16 dev ens3 proto kernel scope link src 172.17.0.42
172.18.0.0/24 dev docker0 proto kernel scope link src 172.18.0.1 linkdown
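To read the weave bridge address on node03 directly (it is the src of the 10.32.0.0/12 route above):
ssh node03 ip address show weave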
Candidate addresses from the outputs above: 172.17.0.1 (the node's own default gateway), 10.32.0.1 (weave bridge on the controlplane), 10.40.0.0 (weave bridge on node03), 172.18.0.1 (docker0). Pods scheduled on node03 get the weave bridge on that node as their default gateway, so the answer is 10.40.0.0.
To verify the actual IP assignment, create a temporary pod:
$ kubectl run busybox --image=busybox --dry-run=client -o yaml --command -- sleep 1000 > pod.yaml
In pod.yaml, add spec.nodeName: node03 so the pod is scheduled on node03:
vi pod.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busybox
  name: busybox
spec:
  nodeName: node03
  containers:
  - command:
    - sleep
    - "1000"
    image: busybox
    name: busybox
Create the busybox pod:
kubectl apply -f pod.yaml
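Check which node the pod landed on and which IP it received:
kubectl get pod busybox -o wide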
Check the IP configuration directly from the pod's shell:
kubectl exec -ti busybox -- sh
ip r
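Inside the pod, the default route should point at the weave bridge address of node03, i.e. default via 10.40.0.0. The same check works without an interactive shell:
kubectl exec busybox -- ip route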