- Introduction
- Getting ready with Azure
- Automated Installation
- Manual Installation
- Setting up the Azure environment
- Create a resource group
- Create a Container Apps environment
- Create the managed Postgres databases
- Create the managed MariaDB database
- Create the managed MongoDB database
- Create the managed Kafka
- Create the Schema Registry
- Create the Azure OpenAI resources
- Deploying the applications
- Load Testing
- Miscellaneous
- References
Azure Container Apps lets you run containerized applications without worrying about orchestration or infrastructure (i.e. we don't have to use Kubernetes directly; it's used under the hood). This guide walks through setting up the Azure environment, the required services, and deploying the Super Hero application. Some of the services (i.e. databases, Kafka, etc.) have been replaced with managed Azure services. This diagram shows the overall architecture:
First of all, you need an Azure subscription. If you don't have one, go to https://signup.azure.com and register. Also make sure you have Azure CLI installed on your machine, as well as curl and jq.
Once everything is installed, sign in to Azure from the CLI:
az login
The entire system can be deployed in an automated fashion by running the deploy-to-azure-containerapps.sh script.
This script can also be downloaded and run outside of this repo.
Run deploy-to-azure-containerapps.sh -h for information and configuration options.
You only have to set this up once. Install the Azure Container Apps and Database extensions for the Azure CLI:
az extension add --name containerapp
az extension add --name rdbms-connect
az extension add --name log-analytics
Register the Microsoft.App namespace:
az provider register --namespace Microsoft.App --wait
Register the Microsoft.OperationalInsights provider:
az provider register --namespace Microsoft.OperationalInsights --wait
The Super Heroes environment and application will be created and deployed using a set of Azure CLI commands. For that, we need to set the following environment variables:
# Images
SUPERHEROES_IMAGES_BASE="quay.io/quarkus-super-heroes"
IMAGES_TAG="java17-latest"
# Azure
RESOURCE_GROUP="super-heroes"
LOCATION="eastus2"
# Need this because some Azure services need to have unique names in a region
UNIQUE_IDENTIFIER=$(whoami)
TAG_SYSTEM=quarkus-super-heroes
# Container Apps
LOG_ANALYTICS_WORKSPACE="super-heroes-logs"
CONTAINERAPPS_ENVIRONMENT="super-heroes-env"
# Postgres
POSTGRES_DB_ADMIN="superheroesadmin"
POSTGRES_DB_PWD="p#ssw0rd-12046"
POSTGRES_DB_VERSION="14"
POSTGRES_SKU="B1ms"
POSTGRES_TIER="Burstable"
# MariaDB
MARIADB_ADMIN_USER="$POSTGRES_DB_ADMIN"
MARIADB_ADMIN_PWD="$POSTGRES_DB_PWD"
MARIADB_SKU="$POSTGRES_SKU"
MARIADB_TIER="$POSTGRES_TIER"
MARIADB_VERSION="5.7"
# MongoDB
MONGO_DB="fights-db-$UNIQUE_IDENTIFIER"
MONGO_DB_VERSION="4.2"
# Kafka
KAFKA_NAMESPACE="fights-kafka-$UNIQUE_IDENTIFIER"
KAFKA_TOPIC="fights"
KAFKA_BOOTSTRAP_SERVERS="$KAFKA_NAMESPACE.servicebus.windows.net:9093"
# Apicurio
APICURIO_APP="apicurio"
APICURIO_IMAGE="apicurio/apicurio-registry-mem:2.4.2.Final"
# Narration
NARRATION_APP="rest-narration"
NARRATION_IMAGE="${SUPERHEROES_IMAGES_BASE}/${NARRATION_APP}:${IMAGES_TAG}"
# Location
LOCATIONS_APP="grpc-locations"
LOCATIONS_DB="locations-db-$UNIQUE_IDENTIFIER"
LOCATIONS_IMAGE="${SUPERHEROES_IMAGES_BASE}/${LOCATIONS_APP}:${IMAGES_TAG}"
LOCATIONS_DB_SCHEMA="locations"
LOCATIONS_DB_CONNECT_STRING="jdbc:mariadb://${LOCATIONS_DB}.mysql.database.azure.com:3306/${LOCATIONS_DB_SCHEMA}?sslmode=trust&useMysqlMetadata=true"
# Heroes
HEROES_APP="rest-heroes"
HEROES_DB="heroes-db-$UNIQUE_IDENTIFIER"
HEROES_IMAGE="${SUPERHEROES_IMAGES_BASE}/${HEROES_APP}:${IMAGES_TAG}"
HEROES_DB_SCHEMA="heroes"
HEROES_DB_CONNECT_STRING="postgresql://${HEROES_DB}.postgres.database.azure.com:5432/${HEROES_DB_SCHEMA}?ssl=true&sslmode=require"
# Villains
VILLAINS_APP="rest-villains"
VILLAINS_DB="villains-db-$UNIQUE_IDENTIFIER"
VILLAINS_IMAGE="${SUPERHEROES_IMAGES_BASE}/${VILLAINS_APP}:${IMAGES_TAG}"
VILLAINS_DB_SCHEMA="villains"
VILLAINS_DB_CONNECT_STRING="jdbc:postgresql://${VILLAINS_DB}.postgres.database.azure.com:5432/${VILLAINS_DB_SCHEMA}?ssl=true&sslmode=require"
# Fights
FIGHTS_APP="rest-fights"
FIGHTS_DB_SCHEMA="fights"
FIGHTS_IMAGE="${SUPERHEROES_IMAGES_BASE}/${FIGHTS_APP}:${IMAGES_TAG}"
# Statistics
STATISTICS_APP="event-statistics"
STATISTICS_IMAGE="${SUPERHEROES_IMAGES_BASE}/${STATISTICS_APP}:${IMAGES_TAG}"
# UI
UI_APP="ui-super-heroes"
UI_IMAGE="${SUPERHEROES_IMAGES_BASE}/${UI_APP}:${IMAGES_TAG}"
# Cognitive Services
COGNITIVE_SERVICE="cs-super-heroes-$UNIQUE_IDENTIFIER"
COGNITIVE_DEPLOYMENT="csdeploy-super-heroes-$UNIQUE_IDENTIFIER"
MODEL="gpt-35-turbo"
MODEL_VERSION="0613"
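Several of these variables are derived from others, so it is worth checking that they expand the way you expect before creating any resources. This is a minimal, standalone sanity check; the UNIQUE_IDENTIFIER value below is a made-up example (the real setup derives it from whoami):

```shell
# Hypothetical identifier; the real setup uses UNIQUE_IDENTIFIER=$(whoami).
UNIQUE_IDENTIFIER="jdoe"
SUPERHEROES_IMAGES_BASE="quay.io/quarkus-super-heroes"
IMAGES_TAG="java17-latest"

# Derived values, built exactly as in the variable list above.
KAFKA_NAMESPACE="fights-kafka-$UNIQUE_IDENTIFIER"
KAFKA_BOOTSTRAP_SERVERS="$KAFKA_NAMESPACE.servicebus.windows.net:9093"
HEROES_APP="rest-heroes"
HEROES_IMAGE="${SUPERHEROES_IMAGES_BASE}/${HEROES_APP}:${IMAGES_TAG}"

echo "$KAFKA_BOOTSTRAP_SERVERS"
echo "$HEROES_IMAGE"
```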
All the resources in Azure have to belong to a resource group. Execute the following command to create a resource group:
az group create \
--name "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM"
Create a Log Analytics workspace:
az monitor log-analytics workspace create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" \
--workspace-name "$LOG_ANALYTICS_WORKSPACE"
Retrieve the Log Analytics Client ID and client secret:
LOG_ANALYTICS_WORKSPACE_CLIENT_ID=`az monitor log-analytics workspace show \
--resource-group "$RESOURCE_GROUP" \
--workspace-name "$LOG_ANALYTICS_WORKSPACE" \
--query customerId \
--output tsv | tr -d '[:space:]'`
echo $LOG_ANALYTICS_WORKSPACE_CLIENT_ID
LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET=`az monitor log-analytics workspace get-shared-keys \
--resource-group "$RESOURCE_GROUP" \
--workspace-name "$LOG_ANALYTICS_WORKSPACE" \
--query primarySharedKey \
--output tsv | tr -d '[:space:]'`
echo $LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET
A container apps environment acts as a boundary for our containers. Containers deployed on the same environment use the same virtual network and the same Log Analytics workspace. Create the container apps environment with the following command:
az containerapp env create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" \
--name "$CONTAINERAPPS_ENVIRONMENT" \
--logs-workspace-id "$LOG_ANALYTICS_WORKSPACE_CLIENT_ID" \
--logs-workspace-key "$LOG_ANALYTICS_WORKSPACE_CLIENT_SECRET"
We need to create two PostgreSQL databases so the Heroes and Villains microservices can store data.
Because we also want to access these databases from an external SQL client, we make them available to the outside world with the --public all parameter.
Create the two databases with the following commands:
az postgres flexible-server create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" application="$HEROES_APP" \
--name "$HEROES_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--public all \
--sku-name "Standard_$POSTGRES_SKU" \
--tier "$POSTGRES_TIER" \
--storage-size 32 \
--version "$POSTGRES_DB_VERSION"
az postgres flexible-server create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" application="$VILLAINS_APP" \
--name "$VILLAINS_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--public all \
--sku-name "Standard_$POSTGRES_SKU" \
--tier "$POSTGRES_TIER" \
--storage-size 32 \
--version "$POSTGRES_DB_VERSION"
Then, we create two database schemas, one for Heroes and another for Villains:
az postgres flexible-server db create \
--resource-group "$RESOURCE_GROUP" \
--server-name "$HEROES_DB" \
--database-name "$HEROES_DB_SCHEMA"
az postgres flexible-server db create \
--resource-group "$RESOURCE_GROUP" \
--server-name "$VILLAINS_DB" \
--database-name "$VILLAINS_DB_SCHEMA"
Add data to both databases using the following commands:
az postgres flexible-server execute \
--name "$HEROES_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--database-name "$HEROES_DB_SCHEMA" \
--file-path "rest-heroes/deploy/db-init/initialize-tables.sql"
az postgres flexible-server execute \
--name "$VILLAINS_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--database-name "$VILLAINS_DB_SCHEMA" \
--file-path "rest-villains/deploy/db-init/initialize-tables.sql"
You can check the content of the tables with the following commands:
az postgres flexible-server execute \
--name "$HEROES_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--database-name "$HEROES_DB_SCHEMA" \
--querytext "select * from hero"
az postgres flexible-server execute \
--name "$VILLAINS_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--database-name "$VILLAINS_DB_SCHEMA" \
--querytext "select * from villain"
If you'd like to see the connection strings to the databases (so you can access your database from an external SQL client), use the following commands:
az postgres flexible-server show-connection-string \
--database-name "$HEROES_DB_SCHEMA" \
--server-name "$HEROES_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--query "connectionStrings.jdbc" \
--output tsv
az postgres flexible-server show-connection-string \
--database-name "$VILLAINS_DB_SCHEMA" \
--server-name "$VILLAINS_DB" \
--admin-user "$POSTGRES_DB_ADMIN" \
--admin-password "$POSTGRES_DB_PWD" \
--query "connectionStrings.jdbc" \
--output tsv
Important
These aren't the actual connection strings used, especially by the Heroes service, which does not use JDBC.
You also need to append ssl=true&sslmode=require to the end of each connection string to force the driver to use SSL.
These commands are just here for your own examination purposes.
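Turning the JDBC string returned by show-connection-string into one the driver will accept is a simple string append. This is a sketch; the base URL below is a made-up example of what the command returns:

```shell
# Hypothetical JDBC URL, shaped like the output of
# `az postgres flexible-server show-connection-string`.
BASE_URL="jdbc:postgresql://heroes-db.postgres.database.azure.com:5432/heroes?user=superheroesadmin&password={password}"

# Append the SSL parameters the driver needs; the URL already has a query
# string, so '&' is the right separator.
SSL_URL="${BASE_URL}&ssl=true&sslmode=require"
echo "$SSL_URL"
```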
We need to create a single MariaDB database so the Location microservice can store data.
Because we also want to access this database from an external SQL client, we make it available to the outside world with the --public all parameter.
Create the database with the following command:
az mysql flexible-server create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" application="$LOCATIONS_APP" \
--name "$LOCATIONS_DB" \
--admin-user "$MARIADB_ADMIN_USER" \
--admin-password "$MARIADB_ADMIN_PWD" \
--public all \
--sku-name "Standard_$MARIADB_SKU" \
--tier "$MARIADB_TIER" \
--storage-size 32 \
--version "$MARIADB_VERSION"
Then, we create the database schema:
az mysql flexible-server db create \
--resource-group "$RESOURCE_GROUP" \
--server-name "$LOCATIONS_DB" \
--database-name "$LOCATIONS_DB_SCHEMA"
Add data to the database using the following command:
az mysql flexible-server execute \
--name "$LOCATIONS_DB" \
--admin-user "$MARIADB_ADMIN_USER" \
--admin-password "$MARIADB_ADMIN_PWD" \
--database-name "$LOCATIONS_DB_SCHEMA" \
--file-path "grpc-locations/deploy/db-init/initialize-tables.sql"
You can check the content of the table with the following command:
az mysql flexible-server execute \
--name "$LOCATIONS_DB" \
--admin-user "$MARIADB_ADMIN_USER" \
--admin-password "$MARIADB_ADMIN_PWD" \
--database-name "$LOCATIONS_DB_SCHEMA" \
--querytext "select * from locations"
If you'd like to see the connection string to the databases (so you can access your database from an external SQL client), use the following command:
az mysql flexible-server show-connection-string \
--database-name "$LOCATIONS_DB_SCHEMA" \
--server-name "$LOCATIONS_DB" \
--admin-user "$MARIADB_ADMIN_USER" \
--admin-password "$MARIADB_ADMIN_PWD" \
--query "connectionStrings.jdbc" \
--output tsv
Important
This isn't the actual connection string used.
You also need to append ?sslmode=trust&useMysqlMetadata=true to the end of the connection string to force the driver to use SSL and to use MySQL metadata.
This command is just here for your own examination purposes.
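The same append works here, with one wrinkle: use ? for the first query parameter and & if the URL already has one. A sketch with a made-up base URL:

```shell
# Hypothetical MariaDB JDBC URL; the real one comes from
# `az mysql flexible-server show-connection-string`.
BASE_URL="jdbc:mariadb://locations-db.mysql.database.azure.com:3306/locations"

# Pick the separator: '&' if a query string already exists, '?' otherwise.
case "$BASE_URL" in
  *\?*) SEP="&" ;;
  *)    SEP="?" ;;
esac

FULL_URL="${BASE_URL}${SEP}sslmode=trust&useMysqlMetadata=true"
echo "$FULL_URL"
```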
We need to create a MongoDB database so the Fight microservice can store data. Create the database in a region where the service is available:
az cosmosdb create \
--resource-group "$RESOURCE_GROUP" \
--locations regionName="$LOCATION" failoverPriority=0 \
--tags system="$TAG_SYSTEM" application="$FIGHTS_APP" \
--name "$MONGO_DB" \
--kind MongoDB \
--server-version "$MONGO_DB_VERSION"
Create the Fights database:
az cosmosdb mongodb database create \
--resource-group "$RESOURCE_GROUP" \
--account-name "$MONGO_DB" \
--name "$FIGHTS_DB_SCHEMA"
To configure the Fight microservice, we will need to set the MongoDB connection string. To get this connection string, use the following command:
MONGO_CONNECTION_STRING=$(az cosmosdb keys list \
--resource-group "$RESOURCE_GROUP" \
--name "$MONGO_DB" \
--type connection-strings \
--query "connectionStrings[?description=='Primary MongoDB Connection String'].connectionString" \
--output tsv)
echo $MONGO_CONNECTION_STRING
The Fight microservice communicates with the Statistics microservice through Kafka. We need to create an Azure Event Hubs namespace for that (Event Hubs exposes a Kafka-compatible endpoint):
az eventhubs namespace create \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--tags system="$TAG_SYSTEM" application="$FIGHTS_APP" \
--name "$KAFKA_NAMESPACE"
Then, create the Kafka topic where the messages will be sent to and consumed from:
az eventhubs eventhub create \
--resource-group "$RESOURCE_GROUP" \
--name "$KAFKA_TOPIC" \
--namespace-name "$KAFKA_NAMESPACE"
To configure Kafka in the Fight and Statistics microservices, get the connection string with the following commands:
KAFKA_CONNECTION_STRING=$(az eventhubs namespace authorization-rule keys list \
--resource-group "$RESOURCE_GROUP" \
--namespace-name "$KAFKA_NAMESPACE" \
--name RootManageSharedAccessKey \
--output json | jq -r .primaryConnectionString)
JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="'
KAFKA_JAAS_CONFIG="${JAAS_CONFIG}${KAFKA_CONNECTION_STRING}\";"
echo $KAFKA_CONNECTION_STRING
echo $KAFKA_JAAS_CONFIG
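The JAAS line built above is plain string concatenation. With a placeholder connection string (the real one comes from the az command above), the assembly looks like this; note that the literal username $ConnectionString must reach Kafka unexpanded, which is why single quotes are used:

```shell
# Placeholder Event Hubs connection string; the real value comes from
# `az eventhubs namespace authorization-rule keys list`.
KAFKA_CONNECTION_STRING="Endpoint=sb://fights-kafka.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=abc123"

# Single quotes keep the literal $ConnectionString username from being
# expanded by the shell; the Event Hubs Kafka endpoint expects it verbatim.
JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="'
KAFKA_JAAS_CONFIG="${JAAS_CONFIG}${KAFKA_CONNECTION_STRING}\";"
echo "$KAFKA_JAAS_CONFIG"
```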
Messages sent to and consumed from Kafka need to be validated against a schema.
These schemas are deployed to Apicurio.
Notice the --min-replicas 1 so Apicurio does not scale to 0 and is always available:
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$FIGHTS_APP" \
--image "$APICURIO_IMAGE" \
--name "$APICURIO_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8080 \
--min-replicas 1 \
--env-vars REGISTRY_AUTH_ANONYMOUS_READ_ACCESS_ENABLED=true
Get the Apicurio URL with the following command:
APICURIO_URL="https://$(az containerapp ingress show \
--resource-group "$RESOURCE_GROUP" \
--name "$APICURIO_APP" \
--output json | jq -r .fqdn)"
echo $APICURIO_URL
And then, update the Apicurio container with these new variables:
az containerapp update \
--resource-group "$RESOURCE_GROUP" \
--name "$APICURIO_APP" \
--set-env-vars REGISTRY_UI_CONFIG_APIURL="${APICURIO_URL}/apis/registry" \
REGISTRY_UI_CONFIG_UIURL="${APICURIO_URL}/ui"
You can go to the Apicurio web console:
open $APICURIO_URL
The Narration microservice needs to access an AI service to generate the text narrating the fight. Azure OpenAI, or "OpenAI on Azure", is a service that provides REST API access to OpenAI's models, including the GPT-4, GPT-3, Codex, and Embeddings series. Azure OpenAI runs on Azure global infrastructure, which meets your production needs for critical enterprise security, compliance, and regional availability.
The create-azure-openai-resources.sh
script can be used to create the required Azure resources.
Similarly, the delete-azure-openai-resources.sh
script can be used to delete the Azure resources.
Warning
Keep in mind that the service may not be free.
First, create an Azure Cognitive Services using the following command:
az cognitiveservices account create \
--name "$COGNITIVE_SERVICE" \
--resource-group "$RESOURCE_GROUP" \
--location "$LOCATION" \
--custom-domain "$COGNITIVE_SERVICE" \
--tags system="$TAG_SYSTEM" \
--kind OpenAI \
--sku S0 \
--yes
Then you need to deploy a GPT model to the Cognitive Services with the following command:
az cognitiveservices account deployment create \
--name "$COGNITIVE_SERVICE" \
--resource-group "$RESOURCE_GROUP" \
--deployment-name "$COGNITIVE_DEPLOYMENT" \
--model-name "$MODEL" \
--model-version "$MODEL_VERSION" \
--model-format OpenAI \
--sku-name Standard \
--sku-capacity 1
Then you need to get the Azure OpenAI key with the following command:
AZURE_OPENAI_KEY=$(
az cognitiveservices account keys list \
--name "$COGNITIVE_SERVICE" \
--resource-group "$RESOURCE_GROUP" \
| jq -r .key1
)
echo $AZURE_OPENAI_KEY
Take note of the AZURE_OPENAI_KEY
value for use in the step for deploying the narration microservice.
Now that the Azure Container Apps environment is all set, we need to deploy our microservices to it. Let's create a Container Apps instance for each of our microservices and the user interface.
The Heroes microservice needs to access the managed Postgres database.
Therefore, we need to set the right properties using our environment variables.
Notice that the Heroes microservice has a --min-replicas
set to 0.
That means it can scale down to zero if not used.
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$HEROES_APP" \
--image "$HEROES_IMAGE" \
--name "$HEROES_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8083 \
--min-replicas 0 \
--env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
QUARKUS_DATASOURCE_USERNAME="$POSTGRES_DB_ADMIN" \
QUARKUS_DATASOURCE_PASSWORD="$POSTGRES_DB_PWD" \
QUARKUS_DATASOURCE_REACTIVE_URL="$HEROES_DB_CONNECT_STRING"
The following command sets the URL of the deployed application to the HEROES_URL
variable:
HEROES_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $HEROES_APP \
--output json | jq -r .fqdn)"
echo $HEROES_URL
You can now invoke the Hero microservice APIs with:
curl "$HEROES_URL/api/heroes/hello"
curl "$HEROES_URL/api/heroes" | jq
To access the logs of the Heroes microservice, you can write the following query:
az containerapp logs show \
--name "$HEROES_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
The Villain microservice also needs to access the managed Postgres database, so we need to set the right variables. Notice the minimum of replicas is also set to 0:
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$VILLAINS_APP" \
--image "$VILLAINS_IMAGE" \
--name "$VILLAINS_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8084 \
--min-replicas 0 \
--env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
QUARKUS_DATASOURCE_USERNAME="$POSTGRES_DB_ADMIN" \
QUARKUS_DATASOURCE_PASSWORD="$POSTGRES_DB_PWD" \
QUARKUS_DATASOURCE_JDBC_URL="$VILLAINS_DB_CONNECT_STRING"
The following command sets the URL of the deployed application to the VILLAINS_URL
variable:
VILLAINS_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $VILLAINS_APP \
--output json | jq -r .fqdn)"
echo $VILLAINS_URL
You can now invoke the Villain microservice APIs with:
curl "$VILLAINS_URL/api/villains/hello"
curl "$VILLAINS_URL/api/villains" | jq
To access the logs of the Villain microservice, you can write the following query:
az containerapp logs show \
--name "$VILLAINS_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
The Location microservice needs to access the managed MariaDB database, so we need to set the right variables.
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$LOCATIONS_APP" \
--image "$LOCATIONS_IMAGE" \
--name "$LOCATIONS_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--transport http2 \
--target-port 8089 \
--min-replicas 1 \
--env-vars QUARKUS_HIBERNATE_ORM_DATABASE_GENERATION=validate \
QUARKUS_HIBERNATE_ORM_SQL_LOAD_SCRIPT=no-file \
QUARKUS_DATASOURCE_USERNAME="$MARIADB_ADMIN_USER" \
QUARKUS_DATASOURCE_PASSWORD="$MARIADB_ADMIN_PWD" \
QUARKUS_DATASOURCE_JDBC_URL="$LOCATIONS_DB_CONNECT_STRING"
The following command sets the host of the deployed application to the LOCATIONS_HOST
variable:
LOCATIONS_HOST="$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $LOCATIONS_APP \
--output json | jq -r .fqdn)"
echo $LOCATIONS_HOST
You can now invoke the Location microservice APIs with:
grpcurl "$LOCATIONS_HOST:443" list
grpcurl "$LOCATIONS_HOST:443" io.quarkus.sample.superheroes.location.v1.Locations.GetRandomLocation | jq
To access the logs of the Location microservice, you can write the following query:
az containerapp logs show \
--name "$LOCATIONS_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
The narration microservice communicates with the Azure OpenAI service.
You'll need the AZURE_OPENAI_KEY
value from the Create the Azure OpenAI resources step.
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$NARRATION_APP" \
--image "${NARRATION_IMAGE}-azure-openai" \
--name "$NARRATION_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8087 \
--min-replicas 1 \
--env-vars QUARKUS_PROFILE=azure-openai \
QUARKUS_LANGCHAIN4J_AZURE_OPENAI_API_KEY="$AZURE_OPENAI_KEY" \
QUARKUS_LANGCHAIN4J_AZURE_OPENAI_RESOURCE_NAME="$COGNITIVE_SERVICE" \
QUARKUS_LANGCHAIN4J_AZURE_OPENAI_DEPLOYMENT_ID="$COGNITIVE_DEPLOYMENT"
The following command sets the URL of the deployed application to the NARRATION_URL
variable:
NARRATION_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $NARRATION_APP \
--output json | jq -r .fqdn)"
echo $NARRATION_URL
You can now invoke the Narration microservice APIs with:
curl "$NARRATION_URL/api/narration/hello"
curl -X POST -d '{"winnerName" : "Super winner", "winnerLevel" : 42, "winnerPowers" : "jumping", "loserName" : "Super loser", "loserLevel" : 2, "loserPowers" : "leaping", "winnerTeam" : "heroes", "loserTeam" : "villains", "location" : {"name" : "Tatooine", "description" : "desert planet"}}' -H "Content-Type: application/json" "$NARRATION_URL/api/narration"
To access the logs of the Narration microservice, you can write the following query:
az containerapp logs show \
--name "$NARRATION_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
The Statistics microservice listens to a Kafka topic and consumes all the fights.
The fight messages are defined by an Avro schema stored in Apicurio (the APICURIO_URL variable with /apis/registry/v2 appended).
Notice that we use the value of $KAFKA_JAAS_CONFIG as the password.
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$STATISTICS_APP" \
--image "$STATISTICS_IMAGE" \
--name "$STATISTICS_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8085 \
--min-replicas 0 \
--env-vars KAFKA_BOOTSTRAP_SERVERS="$KAFKA_BOOTSTRAP_SERVERS" \
KAFKA_SECURITY_PROTOCOL=SASL_SSL \
KAFKA_SASL_MECHANISM=PLAIN \
KAFKA_SASL_JAAS_CONFIG="$KAFKA_JAAS_CONFIG" \
MP_MESSAGING_CONNECTOR_SMALLRYE_KAFKA_APICURIO_REGISTRY_URL="${APICURIO_URL}/apis/registry/v2"
The following command sets the URL of the deployed application to the STATISTICS_URL
variable:
STATISTICS_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $STATISTICS_APP \
--output json | jq -r .fqdn)"
echo $STATISTICS_URL
You can now display the Statistics UI with:
open "$STATISTICS_URL"
To access the logs of the Statistics microservice, you can write the following query:
az containerapp logs show \
--name "$STATISTICS_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
The Fight microservice invokes the Heroes and Villains microservices, sends fight messages to a Kafka topic, and stores the fights in a MongoDB database.
We need to configure Kafka (same connection string as the one used by the Statistics microservice) as well as Mongo and Apicurio (the APICURIO_URL variable with /apis/registry/v2 appended).
As for the microservice invocations, you need to set the URLs of both the Heroes and Villains microservices.
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$FIGHTS_APP" \
--image "$FIGHTS_IMAGE" \
--name "$FIGHTS_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8082 \
--min-replicas 1 \
--env-vars KAFKA_BOOTSTRAP_SERVERS="$KAFKA_BOOTSTRAP_SERVERS" \
KAFKA_SECURITY_PROTOCOL=SASL_SSL \
KAFKA_SASL_MECHANISM=PLAIN \
KAFKA_SASL_JAAS_CONFIG="$KAFKA_JAAS_CONFIG" \
MP_MESSAGING_CONNECTOR_SMALLRYE_KAFKA_APICURIO_REGISTRY_URL="${APICURIO_URL}/apis/registry/v2" \
QUARKUS_LIQUIBASE_MONGODB_MIGRATE_AT_START=false \
QUARKUS_MONGODB_CONNECTION_STRING="$MONGO_CONNECTION_STRING" \
QUARKUS_REST_CLIENT_HERO_CLIENT_URL="$HEROES_URL" \
QUARKUS_REST_CLIENT_NARRATION_CLIENT_URL="$NARRATION_URL" \
QUARKUS_GRPC_CLIENTS_LOCATIONS_HOST="$LOCATIONS_HOST" \
QUARKUS_GRPC_CLIENTS_LOCATIONS_PORT=443 \
QUARKUS_GRPC_CLIENTS_LOCATIONS_PLAIN_TEXT=false \
FIGHT_VILLAIN_CLIENT_BASE_URL="$VILLAINS_URL"
The following command sets the URL of the deployed application to the FIGHTS_URL
variable:
FIGHTS_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $FIGHTS_APP \
--output json | jq -r .fqdn)"
echo $FIGHTS_URL
Use the following curl commands to access the Fight microservice. Remember that we've set the minimum replicas of the Hero and Villain microservices to 0. That means that pinging them might trigger the fallback (you will get a Could not invoke the Villains microservice message). Execute the same curl commands several times so Azure Container Apps has time to instantiate a replica and process the requests:
curl "$FIGHTS_URL/api/fights/hello"
curl "$FIGHTS_URL/api/fights/hello/villains"
curl "$FIGHTS_URL/api/fights/hello/heroes"
curl "$FIGHTS_URL/api/fights" | jq
curl "$FIGHTS_URL/api/fights/randomfighters" | jq
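Since replicas may take a moment to spin up from zero, a small retry helper saves re-running the same curl command by hand. This is a sketch, not part of the repo; the commented example URL assumes FIGHTS_URL is set as above:

```shell
# Retry a command until it succeeds, pausing between attempts.
# Useful while Azure Container Apps scales a service up from zero replicas.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  return 1
}

# Hypothetical usage, once FIGHTS_URL is set:
# retry 10 curl -fsS "$FIGHTS_URL/api/fights/randomfighters"
```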
To access the logs of the Fight microservice, you can write the following query:
az containerapp logs show \
--name "$FIGHTS_APP" \
--resource-group "$RESOURCE_GROUP" \
--output table
We can deploy the Super Hero UI in two different ways:
- Using Azure Container Apps and deploying the Docker image as we did for the previous microservices
- Using Azure Static Web Apps, which is suited for Angular applications
az containerapp create \
--resource-group "$RESOURCE_GROUP" \
--tags system="$TAG_SYSTEM" application="$UI_APP" \
--image "$UI_IMAGE" \
--name "$UI_APP" \
--environment "$CONTAINERAPPS_ENVIRONMENT" \
--ingress external \
--target-port 8080 \
--env-vars API_BASE_URL="$FIGHTS_URL"
UI_URL="https://$(az containerapp ingress show \
--resource-group $RESOURCE_GROUP \
--name $UI_APP \
--output json | jq -r .fqdn)"
echo $UI_URL
open "$UI_URL"
If you are building the UI locally with Node 17 you have to set the NODE_OPTIONS
variable:
node --version
export NODE_OPTIONS=--openssl-legacy-provider
Then, to execute the app locally, set API_BASE_URL to the Fight microservice URL (so it accesses the remote Fight microservice) and run the app:
export API_BASE_URL="${FIGHTS_URL}/api"
cd ui-super-heroes
java -jar target/quarkus-app/quarkus-run.jar
You can check that the URL is correctly set with:
curl http://localhost:8080/env.js
Then, we will deploy the Angular application using Azure Static Webapps. This creates a GitHub action and deploys the application each time you push the code:
az staticwebapp create \
--resource-group $RESOURCE_GROUP \
--location $LOCATION \
--name $UI_APP \
--source https://github.com/agoncal/quarkus-super-heroes \
--branch azure \
--app-location /ui-super-heroes \
--login-with-github
If you have an issue with secrets, you can list the secrets that exist for the static web app:
az staticwebapp secrets list \
--resource-group $RESOURCE_GROUP \
--name $UI_APP \
--output table
Now it's time to add some load to the application. This way, we will be able to see the auto-scaling in Azure Container Apps.
To add some load to an application, you can do it locally using JMeter, but you can also do it remotely on Azure using Azure Load Testing and JMeter. Azure Load Testing is a fully managed load-testing service built for Azure that makes it easy to generate high-scale load and identify app performance bottlenecks. It is available on the Azure Marketplace.
To use it, go to the Azure Portal, search the Marketplace for "Azure Load Testing", and click on "Create":
Create a load testing resource by giving it a unique name (e.g. super-heroes-load-testing suffixed with your id), a location, and a resource group.
Click on "Create":
Creating a load testing resource can take a few moments. Once created, you should see the Azure Load Testing resource available in your resource group:
Select super-heroes-load-testing
and click on "Tests" and then "Create".
You can either create a quick load test using a wizard, or create a load test using a JMeter script.
Choose this second option:
Before uploading a JMeter script, create a load test by entering a name (e.g. "Make them fight") and a description, then click "Next: Test plan >":
The JMeter file that you will upload (located under scripts/jmeter/src/test/jmeter/fight.jmx) sets up a load campaign targeting the Fight microservice.
Basically, it will invoke the FightResource endpoint so super heroes and super villains will fight.
Before uploading the user.properties
file, make sure you change the properties so you target the FightResource
endpoint URL:
# Change these numbers depending on the load you want to add to the application
LOOPS=20
THREADS=2
RAMP=1
# Azure
FIGHT_PROTOCOL=https
FIGHT_PORT=443
# Change the host depending on your settings
FIGHT_HOST=rest-fights.kindocean-1cba89db.eastus.azurecontainerapps.io
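The FIGHT_HOST value is simply the FQDN part of the FIGHTS_URL variable set earlier; stripping the scheme with shell parameter expansion avoids copying it by hand. A sketch with a made-up URL:

```shell
# Hypothetical Fight microservice URL, as set when deploying the application.
FIGHTS_URL="https://rest-fights.kindocean-1cba89db.eastus.azurecontainerapps.io"

# Strip the scheme to get the bare host expected by user.properties.
FIGHT_HOST="${FIGHTS_URL#https://}"
echo "$FIGHT_HOST"
```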
Now, go to the "Test plan" menu and upload the JMeter file fight.jmx.
Then, upload the user.properties file (select "User properties" under "File Relevance").
Execute the test and you will get some metrics:
Go back to the resource group of your Azure portal and select the rest-fights
app.
Click on "Metrics".
Add the "Replica Count" metric to the dashboard.
You will notice that the number of replicas has increased from 1 to 10:
Azure Container Apps has automatically scaled the application based on the load.
If you need to restart a microservice, you actually need to restart its active revision. First, get the active revision:
az containerapp revision list \
--resource-group $RESOURCE_GROUP \
--name $FIGHTS_APP \
--output table
Then, restart it:
az containerapp revision restart \
--resource-group $RESOURCE_GROUP \
--app $FIGHTS_APP \
--name rest-fights-app--mh396rg
If you need to push a new version of a Docker image, make sure it has a different tag. Then, update the container with this new tagged image:
az containerapp update \
--resource-group $RESOURCE_GROUP \
--image quay.io/quarkus-super-heroes/rest-fights:azure2 \
--name $FIGHTS_APP
If you need to update the environment variables of a running container (for example, to enable debug logging on Apicurio), use az containerapp update:
az containerapp update \
--resource-group $RESOURCE_GROUP \
--name $APICURIO_APP \
--set-env-vars QUARKUS_LOG_LEVEL=DEBUG