Practice Test CoreDNS in Kubernetes

ㅁ Identify the DNS solution implemented in this cluster

 

kubectl get pods -n kube-system

Run the command above and check which pods are running as the DNS server. The coredns pods show that CoreDNS is the DNS solution in this cluster.

 

$ kubectl get pods -n kube-system

NAME                                   READY   STATUS    RESTARTS   AGE
coredns-f9fd979d6-45wx2                1/1     Running   0          36m
coredns-f9fd979d6-rqgxt                1/1     Running   0          36m
etcd-controlplane                      1/1     Running   0          37m
kube-apiserver-controlplane            1/1     Running   0          37m
kube-controller-manager-controlplane   1/1     Running   0          37m
kube-flannel-ds-amd64-7grz2            1/1     Running   2          36m
kube-flannel-ds-amd64-9fn78            1/1     Running   0          36m
kube-proxy-65pzq                       1/1     Running   2          36m
kube-proxy-d6g82                       1/1     Running   0          36m
kube-scheduler-controlplane            1/1     Running   0          37m
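
A narrower way to list the DNS pods, assuming the standard kubeadm label k8s-app=kube-dns (the Deployment description further below confirms the pods carry this label):

$ kubectl get pods -n kube-system -l k8s-app=kube-dns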

 

 

ㅁ How many pods of the DNS server are deployed?

 

[2]

 

kubectl get pod -n kube-system | grep coredns | wc -l
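
A variant of the count that avoids matching unrelated lines, again assuming the k8s-app=kube-dns label:

$ kubectl get pods -n kube-system -l k8s-app=kube-dns --no-headers | wc -l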

 

ㅁ What is the name of the service created for accessing CoreDNS?

$ kubectl get service -n kube-system

Check the target service with the command above.

$ kubectl get service -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   40m

 

ㅁ What is the IP of the CoreDNS server that should be configured on PODs to resolve services?

$ kubectl get service -n kube-system

Check the target service with the command above; its ClusterIP is the address that pods use to resolve services.

$ kubectl get service -n kube-system
NAME       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                  AGE
kube-dns   ClusterIP   10.96.0.10   <none>        53/UDP,53/TCP,9153/TCP   40m
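
To print just the ClusterIP without the table, a jsonpath query works as a quick sketch:

$ kubectl get service kube-dns -n kube-system -o jsonpath='{.spec.clusterIP}'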

 

ㅁ Where is the configuration file located for configuring the CoreDNS service?

kubectl describe deployments coredns -n kube-system 
Name:                   coredns
Namespace:              kube-system
CreationTimestamp:      Wed, 31 Mar 2021 06:07:12 +0000
Labels:                 k8s-app=kube-dns
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=kube-dns
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=kube-dns
  Service Account:  coredns
  Containers:
   coredns:
    Image:       k8s.gcr.io/coredns:1.7.0
    Ports:       53/UDP, 53/TCP, 9153/TCP
    Host Ports:  0/UDP, 0/TCP, 0/TCP
    Args:
      -conf
      /etc/coredns/Corefile
    Limits:
      memory:  170Mi
    Requests:
      cpu:        100m
      memory:     70Mi
    Liveness:     http-get http://:8080/health delay=60s timeout=5s period=10s #success=1 #failure=5
    Readiness:    http-get http://:8181/ready delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /etc/coredns from config-volume (ro)
  Volumes:
   config-volume:
    Type:               ConfigMap (a volume populated by a ConfigMap)
    Name:               coredns
    Optional:           false
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   coredns-f9fd979d6 (2/2 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  46m   deployment-controller  Scaled up replica set coredns-f9fd979d6 to 2
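
The Args section answers the question: CoreDNS starts with -conf /etc/coredns/Corefile, and that path is mounted from the config-volume ConfigMap. To print only those arguments, a jsonpath sketch:

$ kubectl get deployment coredns -n kube-system -o jsonpath='{.spec.template.spec.containers[0].args}'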

 

ㅁ How is the Corefile passed in to the CoreDNS POD?

 

[Stored on the kube master]

[Corefile comes built-in with CoreDNS pod]

[pulled from git]

[Configured as a ConfigMap object]
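
The Corefile is configured as a ConfigMap object, as the config-volume entry in the Deployment above shows. A quick way to confirm which ConfigMap backs that volume (assuming config-volume is the only volume, as in the describe output):

$ kubectl get deployment coredns -n kube-system -o jsonpath='{.spec.template.spec.volumes[0].configMap.name}'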

 

 

ㅁ What is the name of the ConfigMap object created for Corefile?

 

[dns-config]

[corefile-config]

[coredns]

[kube-proxy]

 

$ kubectl get configmap -n kube-system

ㅁ What is the root domain/zone configured for this kubernetes cluster?

 

[kubernetes]

[cluster-a.local]

[cluster.local]

[root]

 

$ kubectl describe configmap coredns -n kube-system
Name:         coredns
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
Corefile:
----
.:53 {
    errors
    health {
       lameduck 5s
    }
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
       pods insecure
       fallthrough in-addr.arpa ip6.arpa
       ttl 30
    }
    prometheus :9153
    forward . /etc/resolv.conf {
       max_concurrent 1000
    }
    cache 30
    loop
    reload
    loadbalance
}

Events:  <none>
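
The kubernetes plugin block names cluster.local as the root zone. This can also be cross-checked from inside a pod (assuming a running pod such as the test pod used below), since the kubelet writes the zone into the pod's DNS search domains:

$ kubectl exec test -- cat /etc/resolv.conf
# the search line should end in cluster.local and the nameserver should be 10.96.0.10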

 

ㅁ What name can be used to access the hr web server from the test application?

 

[web-service]

 

kubectl get service

 

kubectl describe service web-service
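
Both the test pod and web-service live in the default namespace, so the short service name is enough. A quick check from the test pod, in the same style as the ping checks used for the payroll service below:

$ kubectl exec test -- ping web-service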

 

ㅁ Which of the names CANNOT be used to access the HR service from the test pod?

 

[web-service.default.pod]

[web-service.default.svc]

[web-service.default]

[web-service]
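
web-service.default.pod is the one that cannot be used: service records live under the svc subdomain (web-service.default.svc.cluster.local), while the pod subdomain only holds dashed-IP records such as 10-244-1-5.default.pod.cluster.local (illustrative). A sketch of the check, mirroring the payroll pings below:

$ kubectl exec test -- ping web-service.default.svc    # resolves
$ kubectl exec test -- ping web-service.default.pod    # expected to fail with 'bad address'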

 

ㅁ Which of the below names can be used to access the payroll service from the test application?

 

[web-service.payroll]

[web]

[web-service.default]

[web-service]

$ kubectl get namespace

NAME              STATUS   AGE
default           Active   67m
hr                Active   32m
kube-node-lease   Active   67m
kube-public       Active   67m
kube-system       Active   67m
payroll           Active   32m


$ kubectl get pods -n payroll
NAME   READY   STATUS    RESTARTS   AGE
web    1/1     Running   0          32m


$ kubectl get service -n payroll

NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
web-service   ClusterIP   10.103.60.113   <none>        80/TCP    32m

 

ㅁ Which of the below names CANNOT be used to access the payroll service from the test application?

 

[web-service.payroll.svc.cluster.local]

[web-service.payroll.svc.cluster]

[web-service.payroll.svc]

[web-service.payroll]

 

$ kubectl exec test -- ping web-service.payroll
PING web-service.payroll (10.103.60.113): 56 data bytes

$ kubectl exec test -- ping web-service.payroll.svc
PING web-service.payroll.svc (10.103.60.113): 56 data bytes

$ kubectl exec test -- ping web-service.payroll.svc.cluster
ping: bad address 'web-service.payroll.svc.cluster'
command terminated with exit code 1

$ kubectl exec test -- ping web-service.payroll.svc.cluster.local
PING web-service.payroll.svc.cluster.local (10.103.60.113): 56 data bytes

 

ㅁ We just deployed a web server - webapp - that accesses a database server - mysql. However, the web server is failing to connect to the database server. Troubleshoot and fix the issue.

 

They could be in different namespaces. First locate the applications. The web server interface can be seen by clicking the Web Server tab at the top of your terminal.

 

$ kubectl get service --all-namespaces

NAMESPACE     NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       kubernetes       ClusterIP   10.96.0.1       <none>        443/TCP                  79m
default       test-service     NodePort    10.103.15.18    <none>        80:30080/TCP             44m
default       web-service      ClusterIP   10.96.184.52    <none>        80/TCP                   44m
default       webapp-service   NodePort    10.99.246.14    <none>        8080:30082/TCP           5m23s
kube-system   kube-dns         ClusterIP   10.96.0.10      <none>        53/UDP,53/TCP,9153/TCP   79m
payroll       mysql            ClusterIP   10.96.154.200   <none>        3306/TCP                 5m23s
payroll       web-service      ClusterIP   10.103.60.113   <none>        80/TCP                   44m
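
The mysql service sits in the payroll namespace while webapp runs in default, so the bare name mysql will not resolve from the webapp pod. Before editing anything, the failure can be confirmed from the webapp logs (a sketch; kubectl logs accepts a deployment reference):

$ kubectl logs deployment/webapp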

 

$ kubectl get deployment 

NAMESPACE     NAME      READY   UP-TO-DATE   AVAILABLE   AGE
default       webapp    1/1     1            1           9m24s
$ kubectl edit deployment webapp

In the spec.containers.env section of the manifest that opens,

    spec:
      containers:
      - env:
        - name: DB_Host
          value: mysql
        - name: DB_User
          value: root
        - name: DB_Password
          value: paswrd

change DB_Host to point at the mysql service in the payroll namespace, as shown below:

    spec:
      containers:
      - env:
        - name: DB_Host
          value: mysql.payroll
        - name: DB_User
          value: root
        - name: DB_Password
          value: paswrd
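
An alternative to editing the manifest by hand is kubectl set env, which patches the same environment variable and triggers a new rollout. A sketch of the equivalent change:

$ kubectl set env deployment/webapp DB_Host=mysql.payroll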

ㅁ From the hr pod, nslookup the mysql service and redirect the output to the file /root/CKA/nslookup.out

$ kubectl exec hr -- nslookup mysql.payroll > /root/CKA/nslookup.out
$ cat /root/CKA/nslookup.out

Server:         10.96.0.10
Address:        10.96.0.10#53

Name:   mysql.payroll.svc.cluster.local
Address: 10.96.154.200
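
Note that the > redirect is handled by the local shell, so the file is written on the node where kubectl runs, which is what the question asks for. If the output had to land inside the pod instead, the redirect would need to run in a shell inside the container (a sketch, assuming the hr image ships /bin/sh; the path is illustrative):

$ kubectl exec hr -- sh -c 'nslookup mysql.payroll > /tmp/nslookup.out'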
