Solutions for lab - Mock Exam 1:
For questions where you need to modify the API server, you can use this resource to diagnose a failure of the API server to restart.
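If the API server fails to come back after a manifest change, these are the usual places to look (assuming a kubeadm cluster with a CRI runtime such as containerd, as used in these labs):
```
# Is the kube-apiserver container running, or crash-looping?
crictl ps -a | grep kube-apiserver

# Inspect the container logs (replace <container-id> with the ID from the previous command)
crictl logs <container-id>

# Kubelet logs often reveal manifest/YAML errors
journalctl -u kubelet | tail -n 30
```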
-
A pod has been created in the `omni` namespace. However, there are a couple of issues with it.
- The pod has been created with more permissions than it needs.
- It allows read access in the directory `/usr/share/nginx/html/internal`, causing an internal site to be accessed publicly. To check this, click on the button called Site (above the terminal) and add `/internal/` to the end of the URL.
Use the below recommendations to fix this.
- Use the AppArmor profile created at `/etc/apparmor.d/frontend` to restrict the internal site.
- There are several service accounts created in the `omni` namespace. Apply the principle of least privilege and use the service account with the minimum privileges (excluding the `default` service account).
- Once the pod is recreated with the correct service account, delete the other unused service accounts in the `omni` namespace (excluding the `default` service account).
-
Use the `omni` namespace to save on typing:
kubectl config set-context --current --namespace omni
-
AppArmor Profile
Load the AppArmor profile into the kernel
apparmor_parser -q /etc/apparmor.d/frontend
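If `aa-status` is available on the node, you can verify the profile is now loaded; the profile name declared inside `/etc/apparmor.d/frontend` is `restricted-frontend`, which is what the pod references below:
```
aa-status | grep restricted-frontend
```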
-
Service Account
To find the service account with the least privileges, we need to examine the roles that are bound to these service accounts. This will determine what privileges they have.
-
Find the service accounts
kubectl get sa
There are 3 service accounts excluding the `default` one. These are the ones we are concerned with.
-
Find the bindings
kubectl get rolebindings
Notice there are 2 bindings, to the roles `fe` and `frontend`.
-
Examine permissions of roles
```
kubectl describe role fe
kubectl describe role frontend
```
-
See which service accounts these roles are bound to
```
kubectl describe rolebinding fe
kubectl describe rolebinding frontend
```
Notice that these roles are bound to the service accounts `fe` and `frontend` respectively. No role is bound to the service account `frontend-default`. This means that this service account is the one with the least privilege, by virtue of the fact that it has no binding and therefore no permissions at all.
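You can confirm this by listing what each service account is actually allowed to do, impersonating each one in turn (a quick sanity check using the account names found above):
```
kubectl -n omni auth can-i --list --as=system:serviceaccount:omni:frontend-default
kubectl -n omni auth can-i --list --as=system:serviceaccount:omni:fe
kubectl -n omni auth can-i --list --as=system:serviceaccount:omni:frontend
```
-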
Recreate the pod with the correct service account, and also apply the AppArmor profile
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: frontend-site
  namespace: omni
spec:
  securityContext:
    appArmorProfile:                  # Apply the AppArmor profile
      localhostProfile: restricted-frontend
      type: Localhost
  serviceAccount: frontend-default    # Use the service account with least privileges
  containers:
  - image: nginx:alpine
    name: nginx
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /data/pages
      type: Directory
```
Note that older versions of Kubernetes used the annotation `container.apparmor.security.beta.kubernetes.io/<container-name>` to apply profiles; this is now part of `securityContext` and can be applied at pod or container level. The annotation still works, but a warning will be printed when the pod is created.
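For reference, the deprecated annotation form for this pod would look like the following (not needed here, shown only for comparison with the `securityContext` approach):
```
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/nginx: localhost/restricted-frontend
```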
-
-
Delete the unused service accounts in the `omni` namespace.
```
kubectl -n omni delete sa frontend
kubectl -n omni delete sa fe
```
-
A pod has been created in the `orion` namespace. It uses secrets as environment variables. Extract the decoded secret for the `CONNECTOR_PASSWORD` and place it under `/root/CKS/secrets/CONNECTOR_PASSWORD`.
You are not done yet: instead of using the secret as an environment variable, mount it as a read-only volume at path `/mnt/connector/password` that can then be used by the application inside.
-
Extract the secret
```
mkdir -p /root/CKS/secrets/
kubectl -n orion get secrets a-safe-secret -o jsonpath='{.data.CONNECTOR_PASSWORD}' | base64 -d > /root/CKS/secrets/CONNECTOR_PASSWORD
```
-
Mount the secret
Recreate the pod, mounting the secret as a read-only volume at the given path
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    name: app-xyz
  name: app-xyz
  namespace: orion
spec:
  containers:
  - image: nginx
    name: app-xyz
    ports:
    - containerPort: 3306
    volumeMounts:
    - name: secret-volume
      mountPath: /mnt/connector/password
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: a-safe-secret
```
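As an optional sanity check (the pod name and secret key come from the manifest and secret above), confirm the secret is now available as a file inside the container:
```
kubectl -n orion exec app-xyz -- cat /mnt/connector/password/CONNECTOR_PASSWORD
```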
-
-
A number of pods have been created in the `delta` namespace. Using the trivy tool, which has been installed on the controlplane, identify and delete all pods except the one with the least number of `CRITICAL` level vulnerabilities.
-
List pods with images for reference
kubectl get pods -n delta -o custom-columns='NAME:.spec.containers[0].name,IMAGE:.spec.containers[0].image'
-
Scan each image using `trivy image`
For each image, replace `<image-name>` with an image from the step above and run the command:
trivy i --severity CRITICAL <image-name> | grep Total
Or, do it using a one-liner for loop.
for i in $(kubectl -n delta get pods -o json | jq -r '.items[].spec.containers[].image') ; do echo $i ; trivy i --severity CRITICAL $i 2>&1 | grep Total ; done
-
Delete vulnerable pods
If an image has CRITICAL vulnerabilities, delete the associated pod. Notice that the image `httpd:2-alpine` has zero CRITICAL vulnerabilities, so we must delete the pods that do not use this image:
```
kubectl -n delta delete pod simple-webapp-1
kubectl -n delta delete pod simple-webapp-3
kubectl -n delta delete pod simple-webapp-4
```
-
-
Create a new pod called `audit-nginx` in the default namespace using the `nginx` image. Secure the syscalls that this pod can use by using the `audit.json` seccomp profile in the pod's security context.
The `audit.json` is provided in the `/root/CKS` directory. Make sure to move it under the `profiles` directory inside the default seccomp directory before creating the pod.
-
Place `audit.json` into the default seccomp directory.
Know that this directory is inside kubelet's configuration directory, which is normally `/var/lib/kubelet`. You can verify this by looking for where kubelet loads its config file from:
ps aux | grep kubelet | grep -- --config
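If the `seccomp/profiles` directory does not yet exist under the kubelet directory, create it first (a harmless no-op if it already exists):
```
mkdir -p /var/lib/kubelet/seccomp/profiles
```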
Copy the `audit.json` seccomp profile to `/var/lib/kubelet/seccomp/profiles`:
cp /root/CKS/audit.json /var/lib/kubelet/seccomp/profiles
-
Create the pod
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: audit-nginx
  namespace: default
spec:
  securityContext:
    seccompProfile:
      type: Localhost
      localhostProfile: profiles/audit.json
  containers:
  - image: nginx
    name: nginx
```
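A quick sanity check: if the `localhostProfile` path did not resolve under the kubelet seccomp directory, the container would typically fail to start with a seccomp-related error, so confirm the pod reaches Running:
```
kubectl get pod audit-nginx
```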
-
-
The CIS Benchmark report for the Controller Manager and Scheduler is available at the tab called `CIS Report 1`.
Inspect this report and fix the issues reported as `FAIL`.
-
Examine report
Click on `CIS Report 1` above the terminal.
Note the failures at 1.3.2 and 1.4.1.
-
Fix issues
For both `kube-controller-manager` and `kube-scheduler`, edit the static manifest files in `/etc/kubernetes/manifests` and add the following to the command arguments:
- --profiling=false
Make sure both pods restart.
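For example, after the edit the command section of `/etc/kubernetes/manifests/kube-scheduler.yaml` would look roughly like this (existing flags vary by cluster and are kept as they are; only `--profiling=false` is added):
```
spec:
  containers:
  - command:
    - kube-scheduler
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    - --profiling=false
```
Since these are static pods, the kubelet recreates them automatically once the manifests are saved. Confirm both come back up:
```
kubectl -n kube-system get pods | grep -E 'controller-manager|scheduler'
```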
-
-
There is something suspicious happening with one of the pods running an `httpd` image in this cluster.
The Falco service shows frequent alerts that start with: `File below a known binary directory opened for writing`.
Identify the rule causing this alert and update it as per the below requirements:
- Output should be displayed as: `CRITICAL File below a known binary directory opened for writing (user_id=user_id file_updated=file_name command=command_that_was_run)`
- Alerts are logged to `/opt/security_incidents/alerts.log`
Do not update the default rules file directly. Rather, use the `falco_rules.local.yaml` file to override.
Note: Once the alert has been updated, you may have to wait for up to a minute for the alerts to be written to the new log location.
-
Create `/opt/security_incidents`
mkdir -p /opt/security_incidents
-
Enable file_output in `/etc/falco/falco.yaml`
```
file_output:
  enabled: true
  keep_alive: false
  filename: /opt/security_incidents/alerts.log
```
-
Add the updated rule to `/etc/falco/falco_rules.local.yaml`
Find the relevant rule in `falco_rules.yaml`, copy it, paste it into `falco_rules.local.yaml` and then modify it to get the requested output.
Refer to the field reference: https://falco.org/docs/reference/rules/supported-fields/
```
- rule: Write below binary dir
  desc: an attempt to write to any file below a set of binary directories
  condition: >
    bin_dir and evt.dir = < and open_write
    and not package_mgmt_procs
    and not exe_running_docker_save
    and not python_running_get_pip
    and not python_running_ms_oms
    and not user_known_write_below_binary_dir_activities
  output: >
    File below a known binary directory opened for writing
    (user_id=%user.uid file_updated=%fd.name command=%proc.cmdline)
  priority: CRITICAL
  tags: [filesystem, mitre_persistence]
```
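Optionally, validate the rules file before reloading (recent Falco releases support `-V`/`--validate` for this; treat it as a convenience check if your installed version provides it):
```
falco -V /etc/falco/falco_rules.local.yaml
```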
-
To hot-reload Falco, use `kill -1` (SIGHUP) on the controlplane node
kill -1 $(pidof falco)
-
Verify falco is running, i.e. you didn't make some syntax error that crashed it
systemctl status falco
-
Check the new log file. It may take up to a minute for events to be logged.
cat /opt/security_incidents/alerts.log
-
A pod called `busy-rx100` has been created in the `production` namespace. Secure the pod by recreating it using the `runtimeClass` called `gvisor`. You may delete and recreate the pod.
Simply recreate the pod using the YAML file below. We only need to add `runtimeClassName`:
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: busy-rx100
  name: busy-rx100
  namespace: production
spec:
  runtimeClassName: gvisor
  containers:
  - image: nginx
    name: busy-rx100
```
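For context, the `gvisor` RuntimeClass referenced above maps pods to a runtime handler, typically `runsc`. A minimal definition looks something like this (shown for reference only; you are not asked to create it in this task):
```
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc   # the gVisor runtime handler configured in the container runtime
```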
Note that the pod may not start because the `gvisor` runtime is not installed on this system. That's OK, as what is being marked is that the pod YAML is correct.
-
We need to make sure that when pods are created in this cluster, they cannot use the latest image tag, irrespective of the repository being used.
To achieve this, a simple Admission Webhook Server has been developed and deployed. A service called image-bouncer-webhook is exposed in the cluster internally. This Webhook server ensures that the developers of the team cannot use the latest image tag. Make use of the following specs to integrate it with the cluster using an ImagePolicyWebhook:
- Create a new admission configuration file at /etc/admission-controllers/admission-configuration.yaml
- The kubeconfig file with the credentials to connect to the webhook server is located at `/root/CKS/ImagePolicy/admission-kubeconfig.yaml`. Note: The directory `/root/CKS/ImagePolicy/` has already been mounted on the kube-apiserver at path `/etc/admission-controllers`, so use this path to store the admission configuration.
- Make sure that if the `latest` tag is used, the request is rejected at all times.
- Enable the Admission Controller.
-
Create the admission configuration inside the `/root/CKS/ImagePolicy` directory as `admission-configuration.yaml`
```
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: ImagePolicyWebhook
  configuration:
    imagePolicy:
      kubeConfigFile: /etc/admission-controllers/admission-kubeconfig.yaml
      allowTTL: 50
      denyTTL: 50
      retryBackoff: 500
      defaultAllow: false
```
Just create the file. You cannot apply an `AdmissionConfiguration` with kubectl. It's configuration, not a resource!
Note that `/root/CKS/ImagePolicy` is mounted at the path `/etc/admission-controllers` in the kube-apiserver, so you can directly place the files under `/root/CKS/ImagePolicy`.
Snippet of the volume and volumeMounts (note these are already present in the apiserver manifest as shown below, so you do not need to add them):
```
containers:
- # other stuff omitted for brevity
  volumeMounts:
  - mountPath: /etc/admission-controllers
    name: admission-controllers
    readOnly: true
volumes:
- hostPath:
    path: /root/CKS/ImagePolicy/
    type: DirectoryOrCreate
  name: admission-controllers
```
-
Update the kube-apiserver command flags and add `ImagePolicyWebhook` to the `enable-admission-plugins` flag
```
- --admission-control-config-file=/etc/admission-controllers/admission-configuration.yaml
- --enable-admission-plugins=NodeRestriction,ImagePolicyWebhook
```
-
Wait for the API server to restart. May take up to a minute.
You can use the following command to monitor the containers:
watch crictl ps
`CTRL + C` exits the watch.
-
Finally, update the pod with the correct image
kubectl set image -n magnum pods/app-0403 app-0403=gcr.io/google-containers/busybox:1.27
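As an optional check that the ImagePolicyWebhook is now enforcing the policy, a pod using the `latest` tag should be rejected (the pod name here is arbitrary):
```
kubectl run test-latest --image=nginx:latest
```
The request should be denied by the admission webhook.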