This tutorial familiarizes participants with basic Kubernetes operations, such as setting resource allocations for containers and scaling pods up and down.
- Choose the deathstarbench-k8s-setup profile, which is part of the Tracing-Pythia project.
- Enter the profile configuration options
- DeathStarBench in this profile is pre-built to work with Intel machines, so choose machines with Intel CPUs. Good choices include c6525-25g and c6525-100g in CloudLab Utah.
- You may or may not wish to increase the number of nodes.
- You can leave the rest of the options as default.
- Read the README for the deathstarbench-k8s-setup profile, which provides detailed information about the profile and how DeathStarBench is configured.
- You will receive two emails: one indicating the cluster is being set up, and another indicating the cluster is ready. Wait for the second email before trying to use the cluster; setup can take 15 minutes or longer.
- Once you receive the email, the HotelReservation application should be running within Kubernetes.
- Please follow this guide to make sure that the hotelReservation application is running in the Kubernetes cluster. (This guide should be in the deathstarbench-k8s-setup profile.)
`kubectl` is the command for interfacing with and managing Kubernetes clusters. This website and this [one](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) provide a nice overview of the command; please read through them. Deployments allow users to configure Kubernetes Pods and replicas.
- List deployments: `kubectl get deployments` lists all deployed services.
  - `kubectl get deployments` lists all services deployed for HotelReservation. Such services' `NAMESPACE` is `default`.
  - `kubectl get deployments -A` lists all services deployed for HotelReservation as well as Kubernetes system services. The additional system services' `NAMESPACE` is `kube-system`.
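As an illustration, the output of `kubectl get deployments` has one row per deployment showing its replica status; the deployment names and ages below are illustrative, and the actual rows depend on which services are deployed:

```
NAME       READY   UP-TO-DATE   AVAILABLE   AGE
frontend   1/1     1            1           5m
```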
- Get deployment info: `kubectl describe deployments/<name>` describes the details of a deployment. For example, `kubectl describe deployment/frontend` shows resource allocation and status information for the frontend:
```
Name:                   frontend
Namespace:              default
CreationTimestamp:      Tue, 06 Aug 2024 12:47:24 -0600
Labels:                 io.kompose.service=frontend
Annotations:            deployment.kubernetes.io/revision: 1
                        kompose.cmd: kompose convert
                        kompose.version: 1.22.0 (955b78124)
Selector:               io.kompose.service=frontend
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:       io.kompose.service=frontend
  Annotations:  kompose.cmd: kompose convert
                kompose.version: 1.22.0 (955b78124)
                sidecar.istio.io/statsInclusionPrefixes:
                  cluster.outbound,cluster_manager,listener_manager,http_mixer_filter,tcp_mixer_filter,server,cluster.xds-grp,listener,connection_manager
                sidecar.istio.io/statsInclusionRegexps: http.*
  Containers:
   hotel-reserv-frontend:
    Image:      deathstarbench/hotel-reservation:latest
    Port:       5000/TCP
    Host Port:  0/TCP
    Command:
      frontend
    Limits:
      cpu:  1
    Requests:
      cpu:        100m
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   frontend-bd4f9cf9f (1/1 replicas created)
Events:          <none>
```
- Get each service's deployment info: The directory `kubernetes` contains subdirectories for the services. In each service subdirectory, `<service_name>-deployment.yaml` describes the deployment of that service. The unit of CPU under `resources` is the millicore; 1000 millicores = 1 CPU core. `requests` specifies the amount of CPU guaranteed for the service, and `limits` specifies the maximum it is allowed to allocate.
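In a deployment YAML, the `resources` stanza described above looks like the following sketch; the values are illustrative, chosen to match the frontend's allocation shown earlier (100m requested, a limit of 1 core):

```yaml
# Fragment of a <service_name>-deployment.yaml (illustrative values)
resources:
  requests:
    cpu: 100m    # 100 millicores guaranteed (0.1 of a CPU core)
  limits:
    cpu: "1"     # at most 1 full CPU core (1000 millicores)
```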
- Change configuration: In general, use `kubectl apply -h` to see a list of flags for different configuration-update options. One example is to copy a service's deployment YAML file to the home directory, e.g., `cp recommendation-deployment.yaml ~`, and modify it — for example, change `replicas` and `requests`. To apply the new deployment policy, use `kubectl apply -f recommendation-deployment.yaml`. To check whether the new service configuration rolled out successfully, use `kubectl rollout status deployments/recommendation`. To get the details of the deployment, use `kubectl describe deployments/recommendation`; the `Events` section should reflect the configuration changes.
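The kind of edit described above can be sketched as the following excerpt of a modified deployment file; the container name and resource values are illustrative, and the real `recommendation-deployment.yaml` contains additional fields (selector, labels, image, ports):

```yaml
# Illustrative excerpt of a modified recommendation-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recommendation
spec:
  replicas: 2              # scaled up from 1
  template:
    spec:
      containers:
        - name: hotel-reserv-recommendation   # container name is illustrative
          resources:
            requests:
              cpu: 200m    # raised CPU guarantee
            limits:
              cpu: "1"
```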
- Scale one service: To scale a specific service, use `kubectl scale deployments/<name> --replicas=<number>`, for example `kubectl scale deployments/frontend --replicas=2`. Then use `kubectl describe deployments/<name>` to check that the deployment details, indicated in the `Events` section, reflect the scaling change.
- Other YAML files for a service: In the subdirectory for a service, `kubernetes/<name>`, other YAML files exist. For example, `recommendation_service.yaml` specifies a service's label and the ports exposed to other services within the cluster. `recommendation_pvc.yaml` defines a PersistentVolumeClaim (PVC) to request storage for the `recommendation` service. `recommendation_persistent-volume.yaml` specifies a PersistentVolume that provides the actual storage resource to which the PVC will bind.
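A PersistentVolumeClaim of the kind `recommendation_pvc.yaml` defines can be sketched as follows; the claim name, access mode, and storage size are illustrative, not taken from the actual file:

```yaml
# Illustrative PersistentVolumeClaim requesting storage for a service
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: recommendation-pvc
spec:
  accessModes:
    - ReadWriteOnce    # volume mounted read-write by a single node
  resources:
    requests:
      storage: 1Gi     # illustrative size
```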
- Difference between service.yaml and deployment.yaml: `recommendation-service.yaml` defines the network access and load-balancing strategy for a service; it specifies a stable IP address, port numbers, and a well-known DNS name used to route traffic to the corresponding pods. Conceptually, it defines that a group of pods belong to one service and that the pods are managed by the policy specified in the `loadBalancer` field; when the field is left blank, the policy is left to the cloud provider to define. `recommendation-deployment.yaml`, on the other hand, specifies deployment details, including the number of replicas, images, and resource limits.
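To make the contrast concrete, here is a minimal sketch of what a Service definition like `recommendation-service.yaml` contains; the port number is illustrative, while the label selector follows the `io.kompose.service` labeling visible in the frontend output above:

```yaml
# Illustrative Service: a stable name routing traffic to matching pods
apiVersion: v1
kind: Service
metadata:
  name: recommendation
spec:
  selector:
    io.kompose.service: recommendation   # pods carrying this label belong to the service
  ports:
    - port: 8085         # illustrative port exposed within the cluster
      targetPort: 8085   # port the container listens on
```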
- Port numbers: At the application's top directory, `/local/DeathStarBench/hotelReservation`, the `docker-compose.yaml` specifies the different services being launched. It also specifies the mapping between external port numbers, which are exposed publicly, and internal port numbers, which are inside a Docker container. The external port numbers need to be unique.
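The external-to-internal mapping described above appears in Compose `ports` entries as `"external:internal"` pairs. A sketch of one such service entry, using the frontend's port 5000 from the describe output earlier (the exact entry in the real file may differ):

```yaml
# Illustrative docker-compose service entry showing the port mapping
services:
  frontend:
    image: deathstarbench/hotel-reservation:latest
    ports:
      - "5000:5000"   # host (external) port : container (internal) port
```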
- Download Go: `wget https://go.dev/dl/go1.23.0.linux-amd64.tar.gz` and place it in `/usr/src`.
- Set `GOPATH`, `GOROOT`, and `PATH`.
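The environment setup the last step calls for can be sketched as follows, assuming the tarball has been unpacked in place so the toolchain lives at `/usr/src/go` (that prefix is an assumption based on the download location above; the upstream default is `/usr/local/go`):

```shell
# Assumes go1.23.0.linux-amd64.tar.gz has already been unpacked under /usr/src,
# so the Go toolchain lives at /usr/src/go.
export GOROOT=/usr/src/go         # where the Go toolchain is installed
export GOPATH="$HOME/go"          # workspace for Go sources and binaries
export PATH="$PATH:$GOROOT/bin:$GOPATH/bin"
```

Add these lines to `~/.bashrc` (or equivalent) so they persist across logins.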