- A pod with the name pod-a, running the httpd server image
- A pod with the name pod-b, running the nginx server image as well as the alpine image
show
k create ns ckad-ns1
k run pod-a --image=httpd --restart Never -n ckad-ns1
apiVersion: v1
kind: Pod
metadata:
  name: pod-b
  namespace: ckad-ns1
spec:
  containers:
  - name: nginx
    image: nginx
  - name: alpine
    image: alpine
    command: ['sleep', '3600']
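A quick optional check that both containers come up:
k -n ckad-ns1 get pods
pod-b should report 2/2 in the READY column once both containers are running.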
The pod my-server is running 3 containers: file-server, log-server, and db-server. When starting it, the log-server fails. Which command should you use to analyze why it is going wrong?
show
kubectl -n ns1 logs my-server -c log-server
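If the logs are empty because the container keeps restarting, these follow-ups help (assuming the same ns1 namespace as above):
kubectl -n ns1 logs my-server -c log-server --previous
kubectl -n ns1 describe pod my-server
--previous shows the logs of the last terminated instance of the container; describe shows the container state and recent events.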
- The webserver should offer its services on port 80 and run in the ckad-ns3 namespace
- This pod should use a readiness probe that gets 60 seconds to complete
- The probe should check the availability of the webserver document root (path /var/www/html) before start and during operation as well
show
apiVersion: v1
kind: Pod
metadata:
  name: webserver
  namespace: ckad-ns3
spec:
  containers:
  - name: webserver
    image: nginx
    ports:
    - containerPort: 80
    readinessProbe:
      exec:
        # check the document root named in the requirement; note that the
        # official nginx image serves from /usr/share/nginx/html, so adjust
        # the path if your image differs
        command: ['ls', '/var/www/html']
      initialDelaySeconds: 60
      periodSeconds: 60
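To watch the probe take effect (optional), the pod should only turn 1/1 Ready once the probe succeeds:
k -n ckad-ns3 get pod webserver -w
k -n ckad-ns3 describe pod webserver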
- It starts 5 replicas that run the nginx:1.8 image
- Each pod has the label app=webshop
- Create the Deployment such that, while updating, the existing pods are terminated before new pods are created to replace them
- The Deployment itself should use the label service=nginx
show
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    service: nginx
spec:
  replicas: 5
  strategy:
    type: Recreate   # existing pods are terminated before new pods are created
  selector:
    matchLabels:
      app: webshop
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
      - name: nginx
        image: nginx:1.8
        ports:
        - containerPort: 80
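To confirm the strategy took effect (optional check):
k get deployment nginx-deployment -o jsonpath='{.spec.strategy.type}'
This should print Recreate.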
In the ckad-ns6 namespace, create a Deployment that runs the nginx:1.9 image and give it the name nginx-deployment:
- Ensure it runs 3 replicas
- After verifying that the Deployment runs successfully, expose it such that users that are external to the cluster can reach it, using a k8s Service
show
k create ns ckad-ns6
k -n ckad-ns6 create deployment nginx-deployment --image nginx:1.9 --replicas 3
k -n ckad-ns6 expose deployment nginx-deployment --port 80 --type NodePort
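To find the assigned NodePort and test external access (the node IP and port placeholders are environment-specific):
k -n ckad-ns6 get svc nginx-deployment
curl http://<node-ip>:<node-port>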
- The pods may be dummies and don't really have to provide specific functionality, as long as they keep running for at least 3600 seconds
- One pod simulates running a database, the other pod simulates running a webserver
- Use a NetworkPolicy to restrict traffic between pods in the following way:
  - Incoming and outgoing traffic to the webserver is allowed without any restrictions
  - Only the webserver is allowed to access the database
  - No outgoing traffic is allowed from the database pod
show
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
  - name: web
    image: alpine
    command: ['sleep', '3600']
---
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    app: db
spec:
  containers:
  - name: db
    image: alpine
    command: ['sleep', '3600']
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-np
spec:
  policyTypes:   # listing Egress with no egress rules denies all outgoing traffic from db
  - Ingress
  - Egress
  podSelector:
    matchLabels:
      app: db
  ingress:       # only the webserver may access the database
  - from:
    - podSelector:
        matchLabels:
          app: web
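One way to sanity-check the policy, assuming the cluster's CNI plugin enforces NetworkPolicy (not all do). Traffic to and from web stays unrestricted because no policy selects the web pod:
k get pod web -o wide
k exec db -- wget -T 2 -qO- http://<web-pod-ip>
The wget from db should time out, because all egress from db is denied; a fast "connection refused" would instead mean the traffic got through (nothing listens in these dummy pods).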
- Create a PV with the name 1312-pv. It should provide 2 GB of storage and read/write access to multiple clients simultaneously. Use any storage type you like
- Next, create a PVC that requests 1 GB from any PV that allows multiple clients simultaneous read/write access. The name of the object should be 1312-pvc
- Finally, create a Pod with the name 1312-pod that uses this persistent volume. It should run an nginx image, and mount the volume on the directory /webdata
show
apiVersion: v1
kind: PersistentVolume
metadata:
  name: 1312-pv
  labels:
    type: local
  # note: PersistentVolumes are cluster-scoped, so no namespace is set here
spec:
  storageClassName: manual
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteMany   # read/write access for multiple clients simultaneously
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: 1312-pvc
  namespace: ckad-1312
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: 1312-pod
  namespace: ckad-1312
spec:
  volumes:
  - name: 1312-storage
    persistentVolumeClaim:
      claimName: 1312-pvc
  containers:
  - name: 1312-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/webdata"
      name: 1312-storage
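Optional check that the claim bound to the volume:
k -n ckad-1312 get pvc 1312-pvc
k get pv 1312-pv
Both should show STATUS Bound before the pod can start.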
Create a cronjob called dice that runs every minute. Use the Pod template located at /root/throw-a-dice. The image throw-dice randomly returns a value between 1 and 6. The result of 6 is considered success and all others are failure. The job should be non-parallel and complete the task once. Use a backoffLimit of 25. If the task is not completed within 20 seconds the job should fail and its pods should be terminated.
- You don't have to wait for the job to complete; it is enough that the cronjob has been created as per the requirements.
show
apiVersion: batch/v1   # batch/v1beta1 is deprecated and was removed in Kubernetes 1.25
kind: CronJob
metadata:
  name: dice
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      completions: 1
      parallelism: 1            # non-parallel
      backoffLimit: 25          # so the job does not quit before it succeeds
      activeDeadlineSeconds: 20 # fail the job and terminate its pods after 20 seconds
      template:
        spec:
          containers:
          - name: dice
            image: kodekloud/throw-dice
          restartPolicy: Never
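You can trigger a one-off run without waiting for the schedule:
k get cronjob dice
k create job dice-test --from=cronjob/dice
k get jobs --watch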
We have deployed a few pods in this cluster in various namespaces. Inspect them and identify the pod which is not in a Ready state. Troubleshoot and fix the issue.
- Next, add a check to restart the container on the same pod if the command ls /var/www/html/file_check fails. This check should start after a delay of 10 seconds and run every 60 seconds.
- You may delete and recreate the object. Ignore the warnings from the probe.
show
- The pod nginx1401 (namespace dev1401) is not in a Ready state
- The readiness probe has failed because the container exposes port 9080 while the readiness probe checks port 8080. Change the readiness probe to port 9080
- Add the liveness probe
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx1401
  namespace: dev1401
spec:
  containers:
  - image: kodekloud/nginx
    imagePullPolicy: IfNotPresent
    name: nginx
    ports:
    - containerPort: 9080
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /
        port: 9080
    livenessProbe:
      exec:
        command:
        - ls
        - /var/www/html/file_check
      initialDelaySeconds: 10
      periodSeconds: 60
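Probe fields cannot be edited on a running pod, so delete and recreate it (the task allows this). Assuming the manifest above is saved as nginx1401.yaml:
k -n dev1401 delete pod nginx1401
k apply -f nginx1401.yaml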
Create a pod called my-busybox in the dev2406 namespace using the busybox image. The container should be called secret and should sleep for 3600 seconds.
- The container should mount a read-only secret volume called secret-volume at the path /etc/secret-volume. The secret being mounted has already been created for you and is called dotfile-secret.
- Make sure that the pod is scheduled on controlplane and no other node in the cluster.
show
- Check the labels of the nodes:
k get node --show-labels
- Create a pod template:
k -n dev2406 run my-busybox --image busybox --dry-run=client -o yaml -- sleep 3600 > pod.yaml
- Modify the container name
- Mount the secret as a volume
- Add a nodeSelector and a toleration so the pod can land on the controlplane node
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: my-busybox
  name: my-busybox
  namespace: dev2406
spec:
  volumes:
  - name: secret-volume
    secret:
      secretName: dotfile-secret
  nodeSelector:
    kubernetes.io/hostname: controlplane
  tolerations:
  # newer clusters taint the control plane with node-role.kubernetes.io/control-plane
  # instead of node-role.kubernetes.io/master; match whatever taint the node carries
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - command:
    - sleep
    args:
    - "3600"
    image: busybox
    name: secret
    volumeMounts:
    - name: secret-volume
      readOnly: true
      mountPath: "/etc/secret-volume"
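Optional checks for scheduling and the mount:
k -n dev2406 get pod my-busybox -o wide   # NODE column should show controlplane
k -n dev2406 exec my-busybox -- ls /etc/secret-volume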
Create a single ingress resource called ingress-vh-routing. The resource should route HTTP traffic to multiple hostnames as specified below:
- The service video-service should be accessible on http://watch.ecom-store.com:30093/video
- The service apparels-service should be accessible on http://apparels.ecom-store.com:30093/wear
Here 30093 is the port used by the Ingress Controller.
show
---
kind: Ingress
apiVersion: networking.k8s.io/v1
metadata:
  name: ingress-vh-routing
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: watch.ecom-store.com
    http:
      paths:
      - pathType: Prefix
        path: "/video"
        backend:
          service:
            name: video-service
            port:
              number: 8080
  - host: apparels.ecom-store.com
    http:
      paths:
      - pathType: Prefix
        path: "/wear"
        backend:
          service:
            name: apparels-service
            port:
              number: 8080
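A quick check once the ingress is admitted, assuming the hostnames resolve to the node running the ingress controller:
k describe ingress ingress-vh-routing
curl http://watch.ecom-store.com:30093/video
curl http://apparels.ecom-store.com:30093/wear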
A pod called dev-pod-dind-878516 has been deployed in the default namespace. Inspect the logs for the container called log-x and redirect the warnings to /opt/dind-878516_logs.txt on the controlplane node.
show
kubectl logs dev-pod-dind-878516 -c log-x | grep WARNING > /opt/dind-878516_logs.txt
Given a container that writes a log file in format A and a container that converts log files from format A to format B, create a deployment that runs both containers such that the log files from the first container are converted by the second container, emitting logs in format B.
Task:
- Create a deployment named deployment-xyz in the default namespace, that:
  - Includes a primary lfccncf/busybox:1 container, named logger-dev
  - Includes a sidecar lfccncf/fluentd:v0.12 container, named adapter-zen
  - Mounts a shared volume /tmp/log on both containers, which does not persist when the pod is deleted
  - Instructs the logger-dev container to run the following command, which outputs logs to /tmp/log/input.log in plain text format:
    while true; do echo 'i luv cncf' >> /tmp/log/input.log; sleep 10; done
- The adapter-zen sidecar container should read /tmp/log/input.log and output the data to /tmp/log/output.* in Fluentd JSON format. Note that no knowledge of Fluentd is required to complete this task: all you need to do is create the ConfigMap from the spec file provided at /opt/KDMC00102/fluentd-configmap.yaml, and mount that ConfigMap to /fluentd/etc in the adapter-zen sidecar container.
show
k create deploy deployment-xyz --image lfccncf/busybox:1 --dry-run=client -o yaml > deploy.yaml
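Then edit deploy.yaml to add the sidecar, the shared emptyDir volume, and the ConfigMap mount. A minimal sketch of the completed manifest follows; the ConfigMap name fluentd-config is an assumption here — use whatever name the spec file at /opt/KDMC00102/fluentd-configmap.yaml actually defines:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-xyz
spec:
  replicas: 1
  selector:
    matchLabels:
      app: deployment-xyz
  template:
    metadata:
      labels:
        app: deployment-xyz
    spec:
      volumes:
      - name: log-volume          # shared volume; emptyDir does not persist when the pod is deleted
        emptyDir: {}
      - name: fluentd-config
        configMap:
          name: fluentd-config    # assumed name; check the provided spec file
      containers:
      - name: logger-dev
        image: lfccncf/busybox:1
        command: ['sh', '-c', "while true; do echo 'i luv cncf' >> /tmp/log/input.log; sleep 10; done"]
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
      - name: adapter-zen
        image: lfccncf/fluentd:v0.12
        volumeMounts:
        - name: log-volume
          mountPath: /tmp/log
        - name: fluentd-config
          mountPath: /fluentd/etc
Create the ConfigMap first, then apply:
k create -f /opt/KDMC00102/fluentd-configmap.yaml
k apply -f deploy.yaml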