Modify the simulation shell (stolostron#855)

* modify the simulation shell

Signed-off-by: Meng Yan <[email protected]>

* modify the simulation shell

Signed-off-by: Meng Yan <[email protected]>

* start

Signed-off-by: myan <[email protected]>

* with number

Signed-off-by: myan <[email protected]>

* with number

Signed-off-by: myan <[email protected]>

* with range

Signed-off-by: myan <[email protected]>

* with range

Signed-off-by: myan <[email protected]>

* fix

Signed-off-by: myan <[email protected]>

* di

Signed-off-by: myan <[email protected]>

* con

Signed-off-by: myan <[email protected]>

* con

Signed-off-by: myan <[email protected]>

* con

Signed-off-by: myan <[email protected]>

* con

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* remove con

Signed-off-by: myan <[email protected]>

* route policy

Signed-off-by: myan <[email protected]>

* update readme

Signed-off-by: myan <[email protected]>

* shell

Signed-off-by: myan <[email protected]>

* batch update

Signed-off-by: myan <[email protected]>

* batch update

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* complianc

Signed-off-by: myan <[email protected]>

* d

Signed-off-by: myan <[email protected]>

* shuffle event name

Signed-off-by: myan <[email protected]>

* shell

Signed-off-by: myan <[email protected]>

* f

Signed-off-by: myan <[email protected]>

---------

Signed-off-by: Meng Yan <[email protected]>
Signed-off-by: myan <[email protected]>
yanmxa authored Apr 7, 2024
1 parent e05ad0b commit 02d6159
Showing 15 changed files with 249 additions and 168 deletions.
9 changes: 4 additions & 5 deletions doc/simulation/inspector/README.md
@@ -23,14 +23,12 @@ The inspector is inspired by [acm-inspector](https://github.com/bjoydeep/acm-ins
selector:
name: multicluster-global-hub-postgres
type: LoadBalancer
status:
loadBalancer: {}
EOF
```

3. The `python3` and the tool `pip3` have been installed on your environment.
3. The `python` and the tool `pip` have been installed on your environment.
4. Enable the `Prometheus` on your global hub.
5. Running the `pip3 install -r ./doc/simulation/inspector/requirements.txt` to install dependencies.
5. Running the `pip install -r ./doc/simulation/inspector/requirements.txt` to install dependencies.

## Running the inspector

@@ -59,10 +57,11 @@ The inspector is inspired by [acm-inspector](https://github.com/bjoydeep/acm-ins
```

- Stop the backend process

```bash
./doc/simulation/inspector/cmd/counter.sh stop
```
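
A minimal end-to-end sketch of the counter workflow, assuming the dispatcher in `counter.sh` (shown further down in this diff) accepts `start` and `stop` as arguments:

```bash
# start sampling the counts before the simulation (assumed "start" argument)
./doc/simulation/inspector/cmd/counter.sh start

# ... run the simulation workload ...

# stop sampling and render the charts from the recorded data
./doc/simulation/inspector/cmd/counter.sh stop
```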

### Get CPU and Memory information

2 changes: 1 addition & 1 deletion doc/simulation/inspector/cmd/check.sh
@@ -7,4 +7,4 @@ REPO_DIR="$(cd "$(dirname ${BASH_SOURCE[0]})/.." ; pwd -P)"
output=${REPO_DIR}/output
mkdir -p ${output}

python3 ${REPO_DIR}/src/entry.py "$1" "$2"
python ${REPO_DIR}/src/entry.py "$1" "$2"
2 changes: 1 addition & 1 deletion doc/simulation/inspector/cmd/check_agent.sh
@@ -7,4 +7,4 @@ REPO_DIR="$(cd "$(dirname ${BASH_SOURCE[0]})/.." ; pwd -P)"
output=${REPO_DIR}/output
mkdir -p ${output}

python3 ${REPO_DIR}/src/agent.py "$1" "$2"
python ${REPO_DIR}/src/agent.py "$1" "$2"
8 changes: 4 additions & 4 deletions doc/simulation/inspector/cmd/counter.sh
@@ -10,27 +10,27 @@ mkdir -p ${output}
# Function to start the backend application
start_backend() {
echo "Starting the backend counter..."
python3 ${REPO_DIR}/src/counter.py override 2>&1 > ${output}/counter.log &
python ${REPO_DIR}/src/counter.py override 2>&1 > ${output}/counter.log &
}

# Function to start the backend application
continue_backend() {
echo "Continue the backend counter..."
pkill -f ${REPO_DIR}/src/counter.py
python3 ${REPO_DIR}/src/counter.py 2>&1 >> ${output}/counter.log &
python ${REPO_DIR}/src/counter.py 2>&1 >> ${output}/counter.log &
}

# Function to stop the backend application
stop_backend() {
echo "Stopping the backend counter..."
# Replace the following line with the actual command or process name to stop your backend app
pkill -f ${REPO_DIR}/src/counter.py
python3 ${REPO_DIR}/src/counter.py draw
python ${REPO_DIR}/src/counter.py draw
}

csv_draw() {
echo "Drawing from the csv..."
python3 ${REPO_DIR}/src/counter.py draw
python ${REPO_DIR}/src/counter.py draw
}

# Check if an argument is provided (start or stop)
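
The dispatcher itself is truncated above, but the `continue_backend` and `csv_draw` functions suggest two more entry points besides `start`/`stop`; a hedged sketch of how they would likely be invoked (the argument names are assumed from the function names):

```bash
# resume counting without overriding the existing data (assumed "continue" argument)
./doc/simulation/inspector/cmd/counter.sh continue

# re-render the charts from the already-collected CSV (assumed "draw" argument)
./doc/simulation/inspector/cmd/counter.sh draw
```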
10 changes: 6 additions & 4 deletions doc/simulation/inspector/src/counter.py
@@ -61,10 +61,11 @@ def record_initial(override):
while True:
try:
cur.execute(initial_sql)
except ValueError:
print("Invalid operation.", ValueError)
except Exception as e:
print("Error executing SQL query:", e)
connection = get_conn()
cur = connection.cursor()
continue

df = pd.DataFrame([
{'time': datetime.now(pytz.utc).strftime("%Y-%m-%d %H:%M:%S"), 'compliance': 0, 'event': 0, 'cluster': 0},
@@ -94,10 +95,11 @@ def record_compliance(override):
while True:
try:
cur.execute(compliance_sql)
except ValueError:
print("Invalid operation.", ValueError)
except Exception as e:
print("Error executing SQL query:", e)
connection = get_conn()
cur = connection.cursor()
continue
# cur.execute(compliance_sql)
df = pd.DataFrame([
{'time': datetime.now(pytz.utc).strftime("%Y-%m-%d %H:%M:%S"), 'compliant': 0, 'non_compliant': 0},
7 changes: 5 additions & 2 deletions doc/simulation/local-policies/policy.sh
@@ -121,8 +121,9 @@ kubectl patch policy $root_policy_namespace.$root_plicy_name -n $cluster_name --
}

function generate_placement() {
placement_namespace=$1
placement_name=$2
placement_namespace="$1"
placement_name="$2"
decision="$3"

cat <<EOF | kubectl apply -f -
apiVersion: cluster.open-cluster-management.io/v1beta1
@@ -155,4 +156,6 @@ metadata:
name: $placement_name
uid: $palcementId
EOF

kubectl patch placementdecision $placement_name-1 -n $placement_namespace --type=merge --subresource status --patch "status: {decisions: [${decision}]}"
}
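
With this change `generate_placement` also patches the PlacementDecision status from a caller-supplied decision list; a hypothetical call (with `policy.sh` sourced), mirroring how `setup-policy.sh` below builds and passes the list:

```bash
# illustrative values: two decisions for placement-rootpolicy-1 in the default namespace
decision="{clusterName: managedcluster-1, reason: ''}, {clusterName: managedcluster-2, reason: ''}"
generate_placement default placement-rootpolicy-1 "$decision"
```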
73 changes: 73 additions & 0 deletions doc/simulation/local-policies/rotate-policy.sh
@@ -0,0 +1,73 @@
#!/bin/bash
# Copyright (c) 2023 Red Hat, Inc.
# Copyright Contributors to the Open Cluster Management project

set -eo pipefail

CURRENT_DIR=$(cd "$(dirname "$0")" || exit;pwd)

# Check if the script is provided with the correct number of positional parameters
if [ $# -lt 2 ]; then
echo "Usage: $0 <policy_start:policy_end> <Compliant/NonCompliant> <KUBECONFIG>"
exit 1
fi

# Parse the parameter using the delimiter ":"
IFS=':' read -r policy_start policy_end <<< "$1"
compliance_state=$2
export KUBECONFIG=$3
concurrent="${4:-1}"

sorted_clusters=$(kubectl get mcl | grep -oE 'managedcluster-[0-9]+' | awk -F"-" '{print $2}' | sort -n)
cluster_start=$(echo "$sorted_clusters" | head -n 1)
cluster_end=$(echo "$sorted_clusters" | tail -n 1)

echo ">> KUBECONFIG=$KUBECONFIG"
echo ">> Rotating Policy $policy_start~$policy_end to $compliance_state on cluster $cluster_start~$cluster_end"

random_number=$(shuf -i 10000-99999 -n 1)

function update_cluster_policies() {
root_policy_namespace=default
root_policy_name=$1
root_policy_status=$2

echo ">> Rotating $root_policy_name to $root_policy_status on cluster $cluster_start~$cluster_end"

count=0
# patch the replicated policy status on each managed cluster: root policy namespace, name and managed cluster
for j in $(seq $cluster_start $cluster_end); do

cluster_name=managedcluster-${j}
event_name=$root_policy_namespace.$root_policy_name.$cluster_name.$random_number
if [[ $root_policy_status == "Compliant" ]]; then
# patch replicas policy status to compliant
kubectl patch policy $root_policy_namespace.$root_policy_name -n $cluster_name --type=merge --subresource status --patch "status: {compliant: Compliant, details: [{compliant: Compliant, history: [{eventName: $event_name, message: Compliant; notification - limitranges container-mem-limit-range found as specified in namespace $root_policy_namespace}], templateMeta: {creationTimestamp: null, name: policy-limitrange-container-mem-limit-range}}]}" &
else
kubectl patch policy $root_policy_namespace.$root_policy_name -n $cluster_name --type=merge --subresource status --patch "status: {compliant: NonCompliant, details: [{compliant: NonCompliant, history: [{eventName: $event_name, message: NonCompliant; violation - limitranges container-mem-limit-range not found in namespace $root_policy_namespace}], templateMeta: {creationTimestamp: null, name: policy-limitrange-container-mem-limit-range}}]}" &
fi

if [ $j == 1 ];then
status="{clustername: managedcluster-${j}, clusternamespace: managedcluster-${j}, compliant: $root_policy_status}"
else
status="${status}, {clustername: managedcluster-${j}, clusternamespace: managedcluster-${j}, compliant: $root_policy_status}"
fi

count=$(( $count + 1 ))
if (( count == concurrent )); then
wait
count=0
fi
done

wait

# patch root policy status
kubectl patch policy $root_policy_name -n $root_policy_namespace --type=merge --subresource status --patch "status: {compliant: $root_policy_status, placement: [{placement: placement-$root_policy_name, placementBinding: binding-${root_policy_name}}], status: [${status}]}"
}

for i in $(seq $policy_start $policy_end)
do
# patch the replicated policies of this root policy on all managed clusters
update_cluster_policies "rootpolicy-${i}" $compliance_state
done
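
A hypothetical invocation of the new script against a single managed hub (the kubeconfig path below is illustrative):

```bash
# rotate rootpolicy-1 ~ rootpolicy-50 to Compliant on every managed cluster found on the hub,
# patching up to 10 replicated policies concurrently (optional fourth argument)
./doc/simulation/local-policies/rotate-policy.sh 1:50 Compliant ~/.kube/hub1-kubeconfig 10
```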
72 changes: 51 additions & 21 deletions doc/simulation/local-policies/setup-policy.sh
@@ -2,53 +2,83 @@
# Copyright (c) 2023 Red Hat, Inc.
# Copyright Contributors to the Open Cluster Management project

set -eo pipefail

set -o pipefail

### This script is used to setup policy and placement for testing
### Usage: ./setup-policy.sh <root-policy-number> <replicas-number/cluster-number> [kubeconfig]
### Usage: ./setup-policy.sh <policy_start:policy_end> <KUBECONFIG>
if [ $# -ne 2 ]; then
echo "Usage: $0 <policy_start:policy_end> <KUBECONFIG>"
exit 1
fi

IFS=':' read -r policy_start policy_end <<< "$1"
KUBECONFIG=$2

echo ">> Generate policy ${policy_start}~${policy_end} on $KUBECONFIG"

REPO_DIR="$(cd "$(dirname ${BASH_SOURCE[0]})/../../.." ; pwd -P)"
CURRENT_DIR=$(cd "$(dirname "$0")" || exit;pwd)
KUBECONFIG=$3
FROM_POLICY_IDX=${FROM_POLICY_IDX:-1}
kubectl apply -f $REPO_DIR/pkg/testdata/crds/0000_00_policy.open-cluster-management.io_policies.crd.yaml
kubectl apply -f $REPO_DIR/pkg/testdata/crds/0000_00_cluster.open-cluster-management.io_placements.crd.yaml
kubectl apply -f $REPO_DIR/pkg/testdata/crds/0000_03_clusters.open-cluster-management.io_placementdecisions.crd.yaml

source ${CURRENT_DIR}/policy.sh

function generate_replicas_policy() {
rootpolicy_name=$1
cluster_num=$2
cluster_start=$2
cluster_end=$3

echo ">> Policy ${rootpolicy_name} is propagating to clusters $cluster_start~$cluster_end on $KUBECONFIG"

# create root policy
limit_range_policy $rootpolicy_name &

# create replicas policy: rootpolicy namespace, name and managed cluster
for j in $(seq 1 $cluster_num); do
echo "Generating managedcluster-${j}/${rootpolicy_name} on $KUBECONFIG"
for j in $(seq $cluster_start $cluster_end); do
cluster_name=managedcluster-${j}
echo ">> Generating policy ${cluster_name}/${rootpolicy_name} on $KUBECONFIG"

limit_range_replicas_policy default $rootpolicy_name ${cluster_name}

limit_range_replicas_policy default $rootpolicy_name managedcluster-${j}
if [ $j == 1 ]; then
status="{clustername: managedcluster-${j}, clusternamespace: managedcluster-${j}, compliant: NonCompliant}"
decision="{clusterName: managedcluster-${j}, reason: ''}"
status="{clustername: $cluster_name, clusternamespace: $cluster_name, compliant: NonCompliant}"
decision="{clusterName: $cluster_name, reason: ''}"
else
status="${status}, {clustername: managedcluster-${j}, clusternamespace: managedcluster-${j}, compliant: NonCompliant}"
decision="${decision}, {clusterName: managedcluster-${j}, reason: ''}"
status="${status}, {clustername: $cluster_name, clusternamespace: $cluster_name, compliant: NonCompliant}"
decision="${decision}, {clusterName: $cluster_name, reason: ''}"
fi

done

wait

# patch root policy status
kubectl patch policy $rootpolicy_name -n default --type=merge --subresource status --patch "status: {compliant: NonCompliant, placement: [{placement: placement-roopolicy-${i}, placementBinding: binding-roopolicy-${i}}], status: [${status}]}" &
kubectl patch policy $rootpolicy_name -n default --type=merge --subresource status --patch "status: {compliant: NonCompliant, placement: [{placement: placement-$rootpolicy_name, placementBinding: binding-$rootpolicy_name}], status: [${status}]}" &

# generate placement and placementdecision: each root policy gets one placement and one placementdecision
generate_placement default placement-$rootpolicy_name &
# patch placementdecision status
kubectl patch placementdecision placement-${rootpolicy_name}-1 -n default --type=merge --subresource status --patch "status: {decisions: [${decision}]}" &

generate_placement default placement-$rootpolicy_name "$decision" &

wait

echo "Rootpolicy ${rootpolicy_name} propagate to $cluster_num clusters on $KUBECONFIG"
echo ">> Policy ${rootpolicy_name} is propagated to clusters $cluster_start~$cluster_end on $KUBECONFIG"
}

for i in $(seq $FROM_POLICY_IDX $1); do
sorted_clusters=$(kubectl get mcl | grep -oE 'managedcluster-[0-9]+' | awk -F"-" '{print $2}' | sort -n)
cluster_start=$(echo "$sorted_clusters" | head -n 1)
cluster_end=$(echo "$sorted_clusters" | tail -n 1)


sorted_policies=$(kubectl get policy -n default | grep 'NonCompliant' | grep -oE 'rootpolicy-[0-9]+' | awk -F"-" '{print $2}' | sort -n)
policy_last=$(echo "$sorted_policies" | tail -n 1)

if [ -n "$policy_last" ] && [ "$policy_last" -gt 0 ]; then
policy_start=$((policy_last + 1))
echo ">> policy_start reset to $((policy_last + 1)) for KUBECONFIG=$KUBECONFIG"
fi

for i in $(seq ${policy_start} ${policy_end}); do
policy_name="rootpolicy-${i}"
# create replicas policy: name and managed cluster
generate_replicas_policy rootpolicy-${i} $2
generate_replicas_policy $policy_name $cluster_start $cluster_end
done
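
A hypothetical invocation matching the new usage message (the kubeconfig path is illustrative):

```bash
# create rootpolicy-1 ~ rootpolicy-50 and propagate each one to every managedcluster-N
# discovered on the hub; existing NonCompliant root policies shift policy_start forward automatically
./doc/simulation/local-policies/setup-policy.sh 1:50 ~/.kube/hub1-kubeconfig
```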
6 changes: 6 additions & 0 deletions doc/simulation/managed-clusters/setup-cluster.sh
@@ -7,12 +7,18 @@

set -eo pipefail

CURRENT_DIR=$(cd "$(dirname "$0")" || exit;pwd)
REPO_DIR="$(cd "$(dirname ${BASH_SOURCE[0]})/../../.." ; pwd -P)"

if [ $# -lt 2 ]; then
cluster_id_prefix="1" # Set a default value of "1" if $2 is not provided
else
cluster_id_prefix="$2" # Use the provided value of $2
fi

# create the mcl crd
kubectl apply -f $REPO_DIR/pkg/testdata/crds/0000_00_cluster.open-cluster-management.io_managedclusters.crd.yaml

# creating the simulated managedcluster
for i in $(seq 1 $1)
do
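
A hedged example of the cluster setup script, based on the visible arguments (`$1` is the cluster count, `$2` the optional `cluster_id_prefix`, defaulting to `1`):

```bash
# create 300 simulated managed clusters on the current KUBECONFIG context,
# using the default cluster_id_prefix of "1"
./doc/simulation/managed-clusters/setup-cluster.sh 300 1
```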
34 changes: 16 additions & 18 deletions doc/simulation/setup/README.md
@@ -5,28 +5,26 @@
You can execute the following script to create the hub clusters and join them into the global hub. To join these clusters, you must set the `KUBECONFIG` environment variable so that these hubs can connect to the global hub. Besides, you also need to provide several parameters:

```bash
./doc/simulation/setup/setup-cluster.sh 2 2000
./doc/simulation/setup/setup-cluster.sh 1:5 1:300
```

- `$1` - How many managed hub clusters will be created
- `$2` - How many managed cluster will be created on per managed hub
- `$3` - Which managed cluster to start on per managed hub, default value is `1`
- `$1` - <hub_start:hub_end> - Managed hubs, from `hub1` to `hub5`
- `$2` - <cluster_start:cluster_end> - Managed clusters on each hub, from `managedcluster-1` to `managedcluster-300`

That means create `5` managed hubs and each has `300` managed clusters. You can also run `./doc/simulation/managed-clusters/cleanup-cluster.sh 300` on each hub cluster to cleanup the generated managed clusters.
That means creating `5` managed hubs, each with `300` managed clusters.

## Create the policies on the managed hub clusters

Running the following script to create the policies on all the managed hubs.

```bash
./doc/simulation/setup/setup-policy.sh 5 50 300
./doc/simulation/setup/setup-policy.sh 1:5 1:50
```

- `$1` - How many managed hub clusters to mock the polices
- `$2` - How many root policy will be created per managed hub cluster
- `$3` - How many managed cluster the root policy will be propagated to on each hub cluster
- `$1` - <hub_start:hub_end> - Managed hubs, from `hub1` to `hub5`
- `$2` - <policy_start:policy_end> - Policies on each hub, from `rootpolicy-1` to `rootpolicy-50`

That means the operation will run on the `5` managed hub concurrently. Each of them will create `50` root policies and propagate to the `300` managed clusters. So there will be `15000` replicas polices on the managed hub cluster. Likewise, you can execute `./doc/simulation/local-policies/cleanup-policy.sh 50 300` on each managed hub to delete the created polices.
That means the operation will run on the `5` managed hubs concurrently. Each of them will create `50` root policies and propagate them to the `300` managed clusters, so there will be `15000` replicated policies on each managed hub cluster.
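
A sketch of that fan-out (illustrative only; the per-hub kubeconfig file naming here is hypothetical):

```bash
# dispatch the per-hub script to the 5 managed hubs concurrently;
# each hub ends up with 50 x 300 = 15000 replicated policies
for i in $(seq 1 5); do
  ./doc/simulation/local-policies/setup-policy.sh 1:50 "${HOME}/.kube/hub${i}-kubeconfig" &
done
wait
```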

## The Scale for Global Hub Test

@@ -49,16 +47,16 @@ kubectl label mcl hub4 vendor=OpenShift --overwrite
kubectl label mcl hub5 vendor=OpenShift --overwrite
```

## Rotate the Status of Polcies
## Rotate the Status of Policies

You can run the following script to update the replicas policies status on each hub cluster.

```bash
# update the 50 root policy on the 300 cluster, and update the status to Compliant(default NonCompliant)
$ ./doc/simulation/setup/rotate-policy.sh 50 300 "Compliant"
# $ ./doc/simulation/setup/rotate-policy.sh 50 300 "NonCompliant"
# update root policies 1 ~ 50 on all the clusters, and set the status to Compliant (default NonCompliant)
$ ./doc/simulation/setup/rotate-policy.sh 1:5 1:50 "Compliant"
# ./doc/simulation/setup/rotate-policy.sh 1:5 1:50 "NonCompliant"
```
- `$1` - How many root policy status will route on per managed hub cluster
- `$2` - How many managed clusters will this `$1` poclies will rotate
- `$3` - The target compliance status
- `$4` - Optional: Specify how many processes can be executed concurrently

- `$1` - <hub_start:hub_end> - Managed hubs, from `hub1` to `hub5`
- `$2` - <policy_start:policy_end> - Policies on each hub, from `rootpolicy-1` to `rootpolicy-50`
- `$3` - The target compliance status