title | platform | product | category | subcategory | date |
---|---|---|---|---|---|
Data Center App Performance Toolkit User Guide For Bamboo | platform | marketplace | devguide | build | 2023-02-13 |
This document walks you through the process of testing your app on Bamboo using the Data Center App Performance Toolkit. These instructions focus on producing the required performance and scale benchmarks for your Data Center app.
In this document, we cover the use of the Data Center App Performance Toolkit in an enterprise-scale environment.
Enterprise-scale environment: a Bamboo Data Center environment used to generate Data Center App Performance Toolkit test results for the Marketplace approval process. Preferably, use the recommended parameters below.
- Set up an enterprise-scale Bamboo Data Center environment on AWS.
- Develop app-specific actions.
- Set up an execution environment for the toolkit.
- Run the test scenarios from the execution environment against the enterprise-scale Bamboo Data Center.
We recommend that you use the official documentation on how to deploy a Bamboo Data Center environment on AWS with Kubernetes (k8s).
The process below describes how to install Bamboo DC with an enterprise-scale dataset included. This configuration was created specifically for performance testing during the DC app review process.
- Read the requirements section of the official documentation.
- Set up the environment.
- Set up AWS security credentials.
{{% warning %}}
Do not use `root` user credentials for cluster creation. Instead, create an admin user.
{{% /warning %}}

- Clone the project repo:

  ```
  git clone -b 2.3.1 https://github.com/atlassian-labs/data-center-terraform.git && cd data-center-terraform
  ```

- Copy the `dcapt.tfvars` file to the `data-center-terraform` folder:

  ```
  wget https://raw.githubusercontent.com/atlassian/dc-app-performance-toolkit/master/app/util/k8s/dcapt.tfvars
  ```
- Set the required variables in the `dcapt.tfvars` file:
  - `environment_name` - any name for your environment, e.g. `dcapt-bamboo`
  - `products` - `bamboo`
  - `bamboo_license` - one-liner of a valid Bamboo license without spaces and new-line symbols
  - `region` - do not change the default region (`us-east-2`). If a specific region is required, contact support.
- From a local terminal (Git bash terminal for Windows) start the installation (~40 min):

  ```
  ./install.sh -c dcapt.tfvars
  ```

- Copy the product URL from the console output. The product URL should look like `http://a1234-54321.us-east-2.elb.amazonaws.com/bamboo`.
- Wait for all remote agents to be started and connected. It can take up to 10 minutes. Agents can be checked in `Settings` > `Agents`.
{{% note %}}
A new trial license can be generated on my.atlassian.com.
Use the `BX02-9YO1-IN86-LO5G` Server ID for generation.
{{% /note %}}
Data dimensions and values for the default enterprise-scale dataset are listed and described in the following table.
Data dimensions | Value for an enterprise-scale dataset |
---|---|
Users | 2000 |
Projects | 100 |
Plans | 2000 |
Remote agents | 50 |
See the Troubleshooting tips page.
Follow the steps described on the Uninstallation and cleanup page.
{{% note %}} You are responsible for the cost of the AWS services running during the reference deployment. For more information, go to aws.amazon.com/pricing. {{% /note %}}
To reduce costs, we recommend keeping your deployment up and running only during performance runs.
The Data Center App Performance Toolkit has its own set of default test actions:
- JMeter: for load generation at scale
- Selenium: for measuring UI timings
- Locust: for executing defined Bamboo plans in parallel
App-specific action: an action (performance test) you have to develop to cover the main use cases of your application. The performance test should focus on common usage of your application, not cover all possible functionality of your app. For example, an application setup screen or other one-time use cases are out of scope for performance testing.
If your app introduces new functionality for Bamboo entities, for example a new task, it is important to extend the base dataset with your app-specific functionality.
- Follow the installation instructions described in the bamboo dataset generator README.md.

- Open `app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/Main.java` and set:
  - `BAMBOO_SERVER_URL`: URL of the Bamboo stack
  - `ADMIN_USER_NAME`: username of the admin user (default is `admin`)

- Log in as `ADMIN_USER_NAME`, go to Profile > Personal access tokens and create a new token with the same permissions as the admin user.

- Run the following command:

  ```
  export BAMBOO_TOKEN=newly_generated_token # for MacOS and Linux
  ```

  or

  ```
  set BAMBOO_TOKEN=newly_generated_token # for Windows
  ```

- Open the `app/util/bamboo/bamboo_dataset_generator/src/main/java/bamboogenerator/service/generator/plan/PlanGenerator.java` file and modify the plan template according to your app, e.g. add a new task.

- Navigate to `app/util/bamboo/bamboo_dataset_generator` and start the generation:

  ```
  ./run.sh # for MacOS and Linux
  ```

  or

  ```
  run # for Windows
  ```

- Log in to the Bamboo UI and make sure that the plan configurations were updated.

- The default duration of a plan is 60 seconds. Measure the plan duration with the new app-specific functionality and modify the `default_dataset_plan_duration` value accordingly in the `bamboo.yml` file. For example, if the plan duration with the app-specific task became 70 seconds, then `default_dataset_plan_duration` should be set to 70 seconds in the `bamboo.yml` file. A sketch for checking a plan's latest duration is shown after this list.
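If you prefer to check the duration programmatically, here is a minimal sketch that reads the latest result of a generated plan via Bamboo's REST API and reuses the `BAMBOO_TOKEN` personal access token created earlier. The `PLAN_KEY` value is a placeholder, and the `buildDurationInSeconds` field name is an assumption to verify against your instance.

```python
# Hedged sketch: fetch the latest duration of a generated plan so you can
# choose a sensible default_dataset_plan_duration for bamboo.yml.
# PLAN_KEY is a placeholder; buildDurationInSeconds is the field the Bamboo
# result resource is expected to return - verify it on your instance.
import os

import requests

BAMBOO_SERVER_URL = "http://localhost:8085/bamboo"  # your Bamboo stack URL
PLAN_KEY = "PROJ-PLAN"  # key of one of the generated plans

response = requests.get(
    f"{BAMBOO_SERVER_URL}/rest/api/latest/result/{PLAN_KEY}/latest.json",
    headers={"Authorization": f"Bearer {os.environ['BAMBOO_TOKEN']}"},
)
response.raise_for_status()
print("Latest plan duration, s:", response.json()["buildDurationInSeconds"])
```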
For example, suppose you develop an app that adds additional UI elements to the view plan summary page. In this case, you should develop a Selenium app-specific action:
- Extend the example app-specific action in `dc-app-performance-toolkit/app/extension/bamboo/extension_ui.py`.
  Code example. The test has to open the plan summary page and measure the time to load of this new app-specific element on the page. A hedged sketch is shown after this list.

- If you need to run `app_specific_action` as a specific user, uncomment the `app_specific_user_login` function in the code example. Note that in this case `test_1_selenium_custom_action` should follow just before the `test_2_selenium_z_log_out` action.

- In `dc-app-performance-toolkit/app/selenium_ui/bamboo_ui.py`, review and uncomment the following block of code so that the newly created app-specific action is executed:

  ```
  # def test_1_selenium_custom_action(webdriver, datasets, screen_shots):
  #     app_specific_action(webdriver, datasets)
  ```

- Run the toolkit with the `bzt bamboo.yml` command to ensure that all Selenium actions, including `app_specific_action`, are successful.
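As a reference, here is a minimal sketch of what an `app_specific_action` in `extension_ui.py` might look like. The element locator (`app-specific-element`) and the plan key in the URL are hypothetical placeholders for whatever UI your app adds; the `BasePage`, `print_timing`, and `BAMBOO_SETTINGS` helpers follow the pattern used by the toolkit's bundled examples.

```python
# Hedged sketch of an app-specific Selenium action for
# app/extension/bamboo/extension_ui.py. The locator and URL path below are
# placeholders - point them at the element and page your app actually adds.
from selenium.webdriver.common.by import By

from selenium_ui.base_page import BasePage
from selenium_ui.conftest import print_timing
from util.conf import BAMBOO_SETTINGS


def app_specific_action(webdriver, datasets):
    page = BasePage(webdriver)

    @print_timing("selenium_app_custom_action")
    def measure():
        # Open a plan summary page and wait until the element introduced by
        # the app is visible, so its render time is included in the timing.
        page.go_to_url(f"{BAMBOO_SETTINGS.server_url}/browse/PROJ-PLAN")
        page.wait_until_visible((By.ID, "app-specific-element"))
    measure()
```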
- Check that the `bamboo.yml` file has correct settings for `application_hostname`, `application_protocol`, `application_port`, `application_postfix`, etc.

- Set the desired execution percentage for `standalone_extension`. The default value is `0`, which means that the `standalone_extension` action will not be executed. For example, for app-specific action development you could set the percentage of `standalone_extension` to 100 and of all other actions to 0 - this way only the `login_and_view_all_builds` and `standalone_extension` actions would be executed.
- Navigate to the `dc-app-performance-toolkit/app` folder and run from the virtualenv (as described in `dc-app-performance-toolkit/README.md`):

  ```
  python util/jmeter/start_jmeter_ui.py --app bamboo
  ```

- Open the `Bamboo` thread group > `actions per login` and navigate to `standalone_extension`.

- Review the existing stubs of `jmeter_app_specific_action`:
  - example GET request
  - example POST request
  - example extraction of variables from the response: `app_id` and `app_token`
  - example assertions of GET and POST requests

- Modify the examples or add new controllers according to your app's main use case.

- Right-click on `View Results Tree` and enable this controller.

- Click the Start button and make sure that `login_and_view_dashboard` and `standalone_extension` are executed.

- Right-click on `View Results Tree` and disable this controller. It is important to disable the `View Results Tree` controller before full-scale results generation.

- Click the Save button.

- To make `standalone_extension` executable during the toolkit run, edit `dc-app-performance-toolkit/app/bamboo.yml` and set the execution percentage of `standalone_extension` according to your use case frequency.

- App-specific tests could be run (if needed) as a specific user. In the `standalone_extension` uncomment the `login_as_specific_user` controller. Navigate to the `username:password` config element and update the values of the `app_specific_username` and `app_specific_password` names with your specific user credentials. Also make sure that you located your app-specific tests between the `login_as_specific_user` and `login_as_default_user_if_specific_user_was_loggedin` controllers.

- Run the toolkit to ensure that all JMeter actions, including `standalone_extension`, are successful.
- Extend the example app-specific action in `dc-app-performance-toolkit/app/extension/bamboo/extension_locust.py`, so that the test will call an endpoint with a GET request, parse the response, use this data to call another endpoint with a POST request, and measure the response time.
  Code example. A hedged sketch is shown after this list.
- In `dc-app-performance-toolkit/app/bamboo.yml`, uncomment `scenario: locust_app_specific` in the `execution` section to enable Locust app-specific test execution.
- In `dc-app-performance-toolkit/app/bamboo.yml`, set `standalone_extension_locust` to `1` - the app-specific action will be executed by every virtual user of the `locust_app_specific` scenario. The default value is `0`, which means that the `standalone_extension_locust` action will not be executed.
- App-specific tests could be run (if needed) as a specific user. Use the `@run_as_specific_user(username='specific_user_username', password='specific_user_password')` decorator for that.
- Run the toolkit with the `bzt bamboo.yml` command to ensure that all Locust actions, including `locust_app_specific_action`, are successful. Note that the `locust_app_specific_action` execution will start some time after the ramp-up period is finished (in 5-6 min).
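For reference, a minimal sketch of such a Locust action is below. The endpoints, regex pattern, and assertion strings are hypothetical placeholders for your app's real API; the `bamboo_measure` decorator and `init_logger` helper follow the conventions used by the toolkit's bundled extension examples.

```python
# Hedged sketch of an app-specific Locust action for
# app/extension/bamboo/extension_locust.py. Endpoints, patterns and
# assertion strings are placeholders for your app's real API.
import re

from locustio.common_utils import init_logger, bamboo_measure

logger = init_logger(app_type='bamboo')


@bamboo_measure('locust_app_specific_action')
def app_specific_action(locust):
    # GET an app endpoint and extract a token from the response body.
    r = locust.get('/plugins/servlet/app/report', catch_response=True)
    content = r.content.decode('utf-8')
    token = re.findall('"token":"(.+?)"', content)
    if 'expected string' not in content:
        logger.error(f"'expected string' was not found in {content}")
    assert 'expected string' in content  # assert GET succeeded

    # POST the extracted token to another app endpoint and assert success.
    body = {'token': token}
    headers = {'content-type': 'application/json'}
    r = locust.post('/plugins/servlet/app/submit', body, headers, catch_response=True)
    content = r.content.decode('utf-8')
    assert 'expected string after successful POST' in content
```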
For generating performance results suitable for the Marketplace approval process, use a dedicated execution environment. This is a separate AWS EC2 instance to run the toolkit from. Running the toolkit from a dedicated instance rather than from a local machine eliminates network fluctuations and guarantees stable CPU and memory performance.
- Go to GitHub and create a fork of dc-app-performance-toolkit.
- Clone the fork locally, then edit the `bamboo.yml` configuration file. Set the enterprise-scale Bamboo Data Center parameters:
{{% warning %}}
For security reasons, do not push the real `application_hostname`, `admin_login` and `admin_password` values to the fork.
Instead, set those values directly in the `.yml` file on the execution environment instance.
{{% /warning %}}
```
application_hostname: bamboo_host_name or public_ip # Bamboo DC hostname without protocol and port e.g. test-bamboo.atlassian.com or localhost
application_protocol: http # http or https
application_port: 80 # 80, 443, 8080, 8085, etc
secure: True # Set False to allow insecure connections, e.g. when using self-signed SSL certificate
application_postfix: # e.g. /bamboo in case of url like http://localhost:8085/bamboo
admin_login: admin
admin_password: admin
load_executor: jmeter
concurrency: 200 # number of concurrent threads to authenticate random users
test_duration: 45m
ramp-up: 3m
total_actions_per_hour: 2000 # number of total JMeter actions per hour
number_of_agents: 50 # number of available remote agents
parallel_plans_count: 40 # number of parallel plan executions
start_plan_timeout: 60 # maximum timeout for a plan to start
default_dataset_plan_duration: 60 # expected plan execution duration
```
- Push your changes to the forked repository.

- Launch an AWS EC2 instance with the following parameters:
  - OS: select `Ubuntu Server 20.04 LTS` from Quick Start.
  - Instance type: `c5.2xlarge`
  - Storage size: `30` GiB

- Connect to the instance using SSH or AWS Systems Manager Session Manager.

  ```
  ssh -i path_to_pem_file ubuntu@INSTANCE_PUBLIC_IP
  ```

- Install Docker. Set up Docker to be managed as a non-root user.

- Clone the forked repository.

You'll need to run the toolkit for each test scenario in the next section.
4. Running the test scenarios from the execution environment against the enterprise-scale Bamboo Data Center
This scenario helps to identify basic performance issues.
To receive performance baseline results without an app installed and without app-specific actions (use code from the `master` branch):
- Use SSH to connect to the execution environment.

- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
  ```
- View the following main results of the run in the `dc-app-performance-toolkit/app/results/bamboo/YY-MM-DD-hh-mm-ss` folder:
  - `results_summary.log`: detailed run summary
  - `results.csv`: aggregated .csv file with all actions and timings
  - `bzt.log`: logs of the Taurus tool execution
  - `jmeter.*`: logs of the JMeter tool execution
  - `locust.*`: logs of the Locust tool execution
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
Performance results generation with the app installed (still use the `master` branch):
- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To receive scalability benchmark results for one-node Bamboo DC with the app and app-specific actions:

- Apply app-specific code changes to a new branch of the forked repo.

- Use SSH to connect to the execution environment.

- Pull the cloned fork repo branch with app-specific actions.

- Run the toolkit with Docker from the execution environment instance:

  ```
  cd dc-app-performance-toolkit
  docker pull atlassian/dcapt
  docker run --shm-size=4g -v "$PWD:/dc-app-performance-toolkit" atlassian/dcapt bamboo.yml
  ```
{{% note %}}
Review the `results_summary.log` file under the artifacts dir location. Make sure that the overall status is `OK` before moving to the next steps. For an enterprise-scale environment run, the acceptable success rate for actions is 95% and above.
{{% /note %}}
To generate a performance regression report:

- Use SSH to connect to the execution environment.
- Install and activate the `virtualenv` as described in `dc-app-performance-toolkit/README.md`.
- Allow the current user (for the execution environment the default user is `ubuntu`) to access the Docker-generated reports:

  ```
  sudo chown -R ubuntu:ubuntu /home/ubuntu/dc-app-performance-toolkit/app/results
  ```

- Navigate to the `dc-app-performance-toolkit/app/reports_generation` folder.
- Edit the `bamboo_profile.yml` file:
  - Under `runName: "without app"`, in the `fullPath` key, insert the full path to the results directory of Run 1.
  - Under `runName: "with app"`, in the `fullPath` key, insert the full path to the results directory of Run 2.
  - Under `runName: "with app and app-specific actions"`, in the `fullPath` key, insert the full path to the results directory of Run 3.
- Run the following command:

  ```
  python csv_chart_generator.py bamboo_profile.yml
  ```

- In the `dc-app-performance-toolkit/app/results/reports/YY-MM-DD-hh-mm-ss` folder, view the `.csv` file (with consolidated scenario results), the `.png` chart file and the Bamboo performance scenario summary report.
Use the scp command to copy report artifacts from the execution environment to a local drive:

- From the local machine terminal (Git bash terminal for Windows) run:

  ```
  export EXEC_ENV_PUBLIC_IP=execution_environment_ec2_instance_public_ip
  scp -r -i path_to_exec_env_pem ubuntu@$EXEC_ENV_PUBLIC_IP:/home/ubuntu/dc-app-performance-toolkit/app/results/reports ./reports
  ```

- Once completed, in the `./reports` folder you will be able to review the action timings with and without your app to see its impact on the performance of the instance. If you see an impact (>20%) on any action timing, we recommend taking a look into the app implementation to understand the root cause of this delta. A minimal sketch for spotting such regressions is shown below.
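The comparison can be eyeballed in the summary report, but if you want to flag regressions automatically, here is a minimal sketch. It assumes JMeter-style aggregate headers (`Label`, `90% Line`) and the file paths shown; adjust the column names and paths to match your actual results files.

```python
# Hedged sketch (not part of the toolkit): flag actions whose timings grew
# by more than 20% between a baseline run and a run with the app installed.
# Column names below are assumptions - check the headers in your results.csv.
import csv


def load_timings(path, label_col="Label", timing_col="90% Line"):
    """Map action label -> timing (ms) from an aggregated results.csv."""
    with open(path, newline="") as f:
        return {row[label_col]: float(row[timing_col]) for row in csv.DictReader(f)}


baseline = load_timings("run1_without_app/results.csv")
with_app = load_timings("run2_with_app/results.csv")

for action, base_ms in sorted(baseline.items()):
    if action in with_app and base_ms > 0:
        delta = (with_app[action] - base_ms) / base_ms * 100
        if delta > 20:
            print(f"{action}: +{delta:.1f}% vs baseline - investigate")
```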
{{% warning %}} It is recommended to terminate an enterprise-scale environment after completing all tests. Follow Uninstallation and Cleanup instructions. {{% /warning %}}
{{% warning %}} Do not forget to attach performance testing results to your ECOHELP ticket. {{% /warning %}}
- Make sure you have the report folder with the Bamboo performance scenario results. The folder should have `profile.csv`, `profile.png`, `profile_summary.log` and the profile run result archives. Archives should contain all raw data created during the run: `bzt.log`, selenium/jmeter/locust logs, .csv and .yml files, etc.
- Attach the report folder to your ECOHELP ticket.
In case of technical questions, issues or problems with the DC Apps Performance Toolkit, contact us for support in the community Slack #data-center-app-performance-toolkit channel.