As a system administrator of applications and services, you can use Infrastructure as Code (IaC) tools to automate the deployment and management of your architecture.
This guide describes how to use different IaC tools to provision and manage Kafka instances in OpenShift Streams for Apache Kafka. To interact with Kafka instances, the IaC tools use two REST APIs that are available in Streams for Apache Kafka: a management API and an instance API.
For more information about creating and setting up Kafka instances, see Getting started with OpenShift Streams for Apache Kafka.
Ansible is an open source configuration management and automation tool.
Ansible uses modules to execute tasks. The output of one module can be the input for the next, or modules can be independent of each other.
You can combine one or more Ansible modules to make a play, and combine two or more plays to create an Ansible playbook: a list of sequential tasks (written in YAML) that Ansible executes automatically when you run the ansible-playbook command. You can use an Ansible playbook to describe the desired state of your infrastructure and then have Ansible provision it.
An Ansible role is a special kind of playbook that is fully self-contained and portable with the tasks, variables, configuration templates, and other supporting files that are needed to complete your chosen tasks. Multiple roles can exist inside a collection, allowing easy sharing of content through Automation Hub and Ansible Galaxy.
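For illustration, the following minimal sketch shows the shape of a playbook: one play that targets localhost and runs a single task. The task uses the core ansible.builtin.debug module; this is a generic example and is not part of the RHOAS collection.
Example minimal playbook
- name: Example play
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Print a message
      ansible.builtin.debug:
        msg: "Hello from Ansible"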
The following procedure shows how to install the OpenShift Application Services (RHOAS) Ansible collection, which allows you to fully manage your Kafka environment with Ansible.
- Python 3.9 or later is installed.
- Ansible 2.9 or later is installed.
- The additional dependencies required by this collection are installed. To install them, enter the following commands:
$ pip install rhoas-sdks
$ pip install python-dotenv
Note: Use the pip install rhoas-sdks --force-reinstall command to ensure that the most up-to-date versions of the SDKs are installed.
- Open a new terminal window.
- Create a Python virtual environment.
$ python3 -m venv rhoas
- Activate the virtual environment.
$ . rhoas/bin/activate
- Install the RHOAS Ansible collection.
$ ansible-galaxy collection install rhoas.rhoas
By default, the ansible-galaxy collection install command uses https://galaxy.ansible.com as the Galaxy server URL. The URL is specified as the value of the GALAXY_SERVER parameter in the ansible.cfg configuration file.
A message states that the RHOAS collection was installed successfully to the following path: /home/<user>/.ansible/collections/ansible_collections/rhoas/rhoas. All modules for the collection can be found in this location.
When the collection is installed, you can reference its content by the fully qualified collection name (FQCN). The command in a task needs to include the FQCN of the module so that it always points to the intended collection. For example, the FQCN for the create_kafka module is rhoas.rhoas.create_kafka.
Note: To find documentation for a module, open a terminal window and enter ansible-doc rhoas.rhoas.<module_name>, replacing <module_name> with the name of the module. The following example outputs the documentation for the create_kafka module. The documentation shows required parameters (denoted by =) and optional parameters (denoted by -) for the module, as well as examples of how to use the module and its output. If you do not pass in values for optional fields, Ansible uses the default values from the Kafka APIs.
$ ansible-doc rhoas.rhoas.create_kafka
You can create, view, configure, and delete the following OpenShift Streams for Apache Kafka resources with the OpenShift Application Services (RHOAS) collection:
- Kafka instances
- Service accounts
- Kafka topics
- Access Control Lists (ACLs)
- You have a Red Hat account.
- You have an offline token used to authenticate the Ansible modules with the OpenShift Application Services API.
Note: The offline token is a refresh token with no expiry that non-interactive processes can use to obtain an access token for OpenShift Application Services to authenticate the user. The token is an OpenShift offline token and you can find it at https://cloud.redhat.com/openshift/token.
- In your IDE, create a new environment variables (.env) file.
- In the .env file, add the following variables required by the Ansible collection (a sample file is shown after this procedure):
API_BASE_HOST
The base URL for the API. For example, https://api.openshift.com.
SSO_BASE_HOST
The base URL for the Red Hat single sign-on (SSO) service. For example, https://sso.redhat.com/auth/realms/redhat-external.
- Save the file in the root directory of the Ansible collection.
Note: If you do not explicitly define the environment variables, the collection uses the URLs shown in the preceding step.
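For reference, a minimal sketch of a complete .env file that sets both variables to the example URLs shown above:
API_BASE_HOST=https://api.openshift.com
SSO_BASE_HOST=https://sso.redhat.com/auth/realms/redhat-external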
- You can run a module in a terminal window to perform a single task. For example, the following command shows how to run a module that creates a Kafka instance. Replace <OFFLINE_TOKEN> with your OpenShift offline token.
Example create_kafka module
$ ansible localhost -m rhoas.rhoas.create_kafka -a 'name=unique-kafka-name billing_model=standard cloud_provider=aws plan="developer.x1" region="us-east-1" openshift_offline_token=<OFFLINE_TOKEN>'
Ansible runs the rhoas.rhoas.create_kafka task in the terminal and creates the instance.
You can also create a playbook of sequential tasks and use Ansible to automatically execute those tasks. The steps in the next section show you how to create and run an example playbook.
The example playbook in this section includes modules for creating a Kafka instance, topic, and service account, and for assigning permissions to the service account. You also configure modules to delete the resources that you previously created.
- To open the example playbook used in this section, use your IDE to navigate to the rhoas folder in the .ansible directory.
- Open the rhoas_test example playbook.
Note: You can also open a web browser to view the rhoas_test playbook.
- Inspect the contents of the example playbook. In particular, observe that the playbook has modules for certain tasks. You will use these modules to create a new playbook that performs the following tasks:
- Creating and deleting a Kafka instance
- Creating and deleting a service account
- Creating Access Control List (ACL) permission bindings
- Creating, updating, and deleting a topic
Note: The playbook uses your offline token to authenticate with the Kafka Management API. If you do not specify the token as an argument for a given task, the module attempts to read it from the OFFLINE_TOKEN environment variable (see the export example after these notes).
Note: The example playbook used in this section includes comments that indicate how to directly specify values rather than fetching them dynamically. For example, to specify a Kafka instance ID, a comment in the playbook states that you can include the following line:
kafka_id: <kafka_id>
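For example, in a POSIX shell you can export the token once so that every module in the playbook can read it from the environment (a sketch; replace the placeholder with your own token):
$ export OFFLINE_TOKEN=<your_offline_token>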
- In your IDE, create a new playbook file and save it as rhoas_kafka.yml in the rhoas directory.
- At the start of the new playbook, define the name of the playbook and the group of hosts on which to run it. Copy the following example and paste it into the rhoas_kafka.yml file. In this example, the host is localhost and the connection is local because you are working in a virtual environment.
Example first section of the rhoas_kafka playbook
- name: RHOAS kafka
  hosts: localhost
  gather_facts: false
  connection: local
  tasks:
To manage a Kafka instance using Ansible, you can use the create_kafka module and run it as part of a playbook or as a single module.
- You have a Red Hat account.
- You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.
- To add a module that creates a new Kafka instance to a playbook, copy the create_kafka module shown in the following example and paste it into the tasks: section of your rhoas_kafka.yml file.
Example create_kafka module
  - name: Create kafka
    rhoas.rhoas.create_kafka:
      name: "kafka-name"
      instance_type: "x1"
      billing_model: "standard"
      cloud_provider: "aws"
      region: "us-east-1"
      plan: "developer.x1"
      billing_cloud_account_id: "123456789"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
    register: kafka_req_resp
- In the name field, enter a name for the Kafka instance.
- In the billing_cloud_account_id field, enter the billing cloud account ID.
- In the openshift_offline_token field, enter your OpenShift offline token. All other information for the instance is provided by the Kafka APIs.
When you run the create_kafka module as part of the playbook at the end of these steps, Ansible saves the output of that task in the variable named in the register field. In the preceding example, Ansible saves the created Kafka instance as kafka_req_resp.
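For example, to inspect the registered output in a later task, you could add a hypothetical debug task. ansible.builtin.debug is a core Ansible module; the exact keys available in kafka_req_resp depend on the create_kafka module's return values.
  - name: Show the create_kafka response
    ansible.builtin.debug:
      var: kafka_req_resp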
To connect your applications or services to a Kafka instance in Streams for Apache Kafka, you must first create a service account with credentials.
- You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.
- You have created a Kafka instance.
- To add a module that creates a service account, copy the create_service_account module shown in the following example and paste it into the rhoas_kafka.yml file.
Example create_service_account module
  - name: Create Service Account
    rhoas.rhoas.create_service_account:
      name: "service-account-name"
      description: "This is a description of the service account"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
    register: srvce_acc_resp_obj
- Enter values for the name, description, and openshift_offline_token fields.
When you run the create_service_account module as part of the playbook at the end of these steps, Ansible outputs the generated service account credentials in the client_id and client_secret fields in the terminal after it creates the service account.
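For example, to reference the generated credentials in a later task, you could add a hypothetical debug task (the key names match those used elsewhere in this guide; avoid printing client_secret in real runs, because it is a sensitive value):
  - name: Show the generated client ID
    ansible.builtin.debug:
      msg: "Service account client ID: {{ srvce_acc_resp_obj['client_id'] }}"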
After you create a service account to connect to a Kafka instance, you must also set the appropriate level of access for that new account in an Access Control List (ACL) for the Kafka instance.
- You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.
- You have created a Kafka instance.
- You understand how Access Control Lists (ACLs) enable you to manage how user accounts and service accounts can access the Kafka resources that you create. See Managing account access in OpenShift Streams for Apache Kafka.
- To create Access Control List (ACL) permissions for the service account and bind that ACL to the Kafka instance, copy the create_kafka_acl_binding module shown in the following example and paste it into your rhoas_kafka.yml file.
Example create_kafka_acl_binding module
  - name: Create kafka ACL Service Binding
    rhoas.rhoas.create_kafka_acl_binding:
      kafka_id: "{{ kafka_req_resp.kafka_id }}"
      # To hard-code the kafka_id, uncomment and use the following line:
      # kafka_id: "KAFKA_ID"
      principal: "{{ srvce_acc_resp_obj['client_id'] }}"
      # To hard-code the principal, uncomment and use the following line:
      # principal: "PRINCIPAL_ID"
      resource_name: "topic-name"
      resource_type: "Topic"
      pattern_type: "PREFIXED"
      operation_type: "all"
      permission_type: "allow"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
    register: kafka_acl_resp
- To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the registered kafka_req_resp variable.
- In the openshift_offline_token field, enter your OpenShift offline token.
- Consider whether you need to specify your own value for any of the fields in the following list. These fields must all have values in an ACL binding module.
principal
The user or service account that this binding applies to. This example uses the service account client ID.
resource_name
The Kafka resource that you are granting access to. This example specifies the Kafka topic that you create in the playbook.
resource_type
The type of resource you grant access to. This example uses Topic.
pattern_type
The type of pattern of the ACL. This example uses the PREFIXED pattern type, meaning that Kafka tries to match the prefix of the resource name with the resource specified in the ACL.
operation_type
The type of operation (an action performed on a resource) that is allowed for the given user on this resource.
permission_type
Whether permission is allowed or denied for the given user.
After you create a Kafka instance, you can create Kafka topics to start producing and consuming messages in your applications and services.
- You have an offline token that authenticates the Ansible modules with the Kafka Management API.
- You have created a Kafka instance.
- To create a Kafka topic, copy the create_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file.
Example create_kafka_topic module
  - name: Create Kafka Topic
    rhoas.rhoas.create_kafka_topic:
      topic_name: "kafka-topic-name"
      kafka_id: "{{ kafka_req_resp.id }}"
      # To hard-code the kafka_id, uncomment and use the following line:
      # kafka_id: "KAFKA_ID"
      partitions: 1
      retention_period_ms: "86400000"
      retention_size_bytes: "1073741824"
      cleanup_policy: "compact"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
    register: create_topic_res_obj
- To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.
- In the openshift_offline_token field, enter your OpenShift offline token.
- To update the configuration of the topic, copy the update_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file. In the following example, the cleanup policy is updated from compact to delete by replacing "compact" with "delete" in the cleanup_policy field.
Example update_kafka_topic module
  - name: Update Kafka Topic
    rhoas.rhoas.update_kafka_topic:
      topic_name: "kafka-topic-name"
      kafka_id: "{{ kafka_req_resp.id }}"
      # To hard-code the kafka_id, uncomment and use the following line:
      # kafka_id: "KAFKA_ID"
      partitions: 1
      retention_period_ms: "86400000"
      retention_size_bytes: "1073741824"
      cleanup_policy: "delete"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
    register: update_topic_res_obj
- To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.
- (Optional) You can modify the values in the retention_period_ms and retention_size_bytes fields instead of accepting the default values.
To delete a topic, use the delete_kafka_topic module.
- Copy the delete_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file.
Example delete_kafka_topic module
  - name: Delete Kafka Topic
    rhoas.rhoas.delete_kafka_topic:
      topic_name: "KAFKA_TOPIC_NAME"
      kafka_id: "{{ kafka_req_resp_obj['kafka_id'] }}"
      # To hard-code the kafka_id, uncomment and use the following line:
      # kafka_id: "KAFKA_ID"
      openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
- In the openshift_offline_token field, enter your OpenShift offline token.
- To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.
To delete a service account, use the delete_service_account_by_id module.
- You have created a service account.
- To delete the service account, copy the delete_service_account_by_id module shown in the following example and paste it into the rhoas_kafka.yml file.
- To directly specify a service account ID, enter a value in the service_account_id field. Otherwise, Ansible gets the service account ID from the srvce_acc_resp_obj variable.
Example delete_service_account_by_id module
  - name: Delete Service Account
    rhoas.rhoas.delete_service_account_by_id:
      # service_account_id: "service_account_id"
      service_account_id: "{{ srvce_acc_resp_obj['client_id'] }}"
      # openshift_offline_token: "OFFLINE_TOKEN"
To delete a Kafka instance, use the delete_kafka_by_id module.
- You have created a Kafka instance.
- To deprovision and delete the Streams for Apache Kafka instance, copy the delete_kafka_by_id module shown in the following example and paste it into the rhoas_kafka.yml file.
Example delete_kafka_by_id module
  - name: Delete kafka instance by ID
    rhoas.rhoas.delete_kafka_by_id:
      kafka_id: "{{ kafka_req_resp_obj['kafka_id'] }}"
      openshift_offline_token: "offline_token"
- In the openshift_offline_token field, enter your OpenShift offline token.
When you have finished creating the resources, run the playbook.
- Save any changes that you have made to the rhoas_kafka.yml playbook.
- Open a terminal and enter the following command to run the playbook:
$ ansible-playbook rhoas_kafka.yml
The playbook runs through the tasks sequentially, generating output in the terminal window. When finished, Ansible displays a PLAY RECAP message stating that all 8 tasks have an ok status, meaning that they all ran successfully.
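Tip: Before a full run, you can check the playbook for syntax errors without executing any tasks by using the ansible-playbook command's standard --syntax-check option:
$ ansible-playbook rhoas_kafka.yml --syntax-check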
HashiCorp Terraform is an Infrastructure as Code (IaC) tool that enables you to build and change infrastructure safely and efficiently through human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.
You can fully manage your Kafka environment through your Terraform system using the OpenShift Application Services (RHOAS) Terraform provider, which is available in the official Terraform provider registry.
You can create, view, configure, and delete the following Streams for Apache Kafka resources:
- Kafka instances
- Service accounts
- Kafka topics
- Access Control Lists (ACLs)
- You have a Red Hat account.
- Terraform v1.3.4 or later is installed.
- In your web browser, navigate to the OpenShift Application Services (RHOAS) Terraform provider.
- In the upper-right corner of the RHOAS Terraform provider registry, click Use Provider.
A pane opens that shows the configuration you need to use the RHOAS Terraform provider.
- In the pane that opened, copy the configuration shown. The following lines show an example of the configuration.
Example RHOAS provider configuration
terraform {
  required_providers {
    rhoas = {
      source  = "redhat-developer/rhoas"
      version = "0.4.0"
    }
  }
}

provider "rhoas" {
  # configuration options
}
- In your IDE, open a new file and paste the configuration you copied. You can specify configuration options in the provider section.
- Save the file as a Terraform configuration file called main.tf in a directory of your choice. This procedure uses a directory called Terraform.
- Open a terminal and navigate to the Terraform directory.
$ cd Terraform
- Enter the following command. This command initializes the working directory that contains Terraform configuration files and installs any required plug-ins.
$ terraform init
When the Terraform provider has been initialized, you see a confirmation message.
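Optionally, you can preview the changes that Terraform would make before you apply anything. The standard terraform plan command shows the planned actions without modifying any infrastructure:
$ terraform plan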
Resources are the most important element in the Terraform language. Each resource block in a Terraform provider describes one or more infrastructure objects. For OpenShift Streams for Apache Kafka, such infrastructure objects might include Kafka instances, service accounts, Access Control Lists (ACLs), and topics. The procedures that follow show what resources you can add to your Terraform configuration file to create a Kafka instance and its associated resources such as service accounts and topics.
To manage a Kafka instance using Terraform, use the rhoas_kafka resource.
- You have a Red Hat account.
- You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.
Note: The offline token is a refresh token with no expiry that non-interactive processes can use to obtain an access token for OpenShift Application Services to authenticate the user. The token is an OpenShift offline token and you can find it at https://cloud.redhat.com/openshift/token. Because the offline token is a sensitive value that varies between environments, it is best specified as an environment variable rather than hard-coded in your configuration.
- Open the main.tf file in your IDE for editing.
- Copy the rhoas_kafka resource shown in the following example and paste it into the main.tf file after the provider configuration. This example uses the "my-instance" identifier and creates a Kafka instance called my-instance.
Note: In the following examples, the identifier and the name of the infrastructure object are the same for demonstration purposes only. You can choose different values for each field.
Example rhoas_kafka resource
resource "rhoas_kafka" "my-instance" {
  name          = "my-instance"
  plan          = "developer.x1"
  billing_model = "standard"
}

output "bootstrap_server_my-instance" {
  value = rhoas_kafka.my-instance.bootstrap_server_host
}
- Save your changes.
- Open a terminal and apply the changes you made to your Terraform provider configuration.
$ terraform apply
In the terminal, Terraform displays a message that rhoas_kafka.my-instance will be created. Terraform automatically sets values for cloud provider and region in the terminal. All other information for the instance is provided by the Kafka APIs.
- When you're ready to create your instance, type yes. The generated bootstrap server URL appears in the terminal as an output.
Note: Running terraform apply for the first time also creates the Terraform state file. Terraform logs information about the resources it has created in this state file, which allows Terraform to know which resources are under its control and when to update and delete them. The Terraform state file is named terraform.tfstate by default and is kept in the same directory where Terraform is run. Sensitive information such as the offline token, client ID, and client secret can be found in the terraform.tfstate file. Running terraform apply again updates this file.
- To verify that Terraform successfully created your Kafka instance, in your web browser, open the Kafka Instances page of the Streams for Apache Kafka web console.
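As noted above, the state file records every resource under Terraform's control. To list those resources at any time, you can use the standard terraform state subcommand:
$ terraform state list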
To connect your applications or services to a Kafka instance in Streams for Apache Kafka, you must first create a service account with credentials.
- You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.
- You have created a Kafka instance.
- To create a service account, copy and paste the rhoas_service_account resource shown in the following example into the main.tf file. This example uses the "my-service-account" identifier and creates a service account called my-service-account.
Example rhoas_service_account resource
resource "rhoas_service_account" "my-service-account" {
  name        = "my-service-account"
  description = "<description of service account>"
}

output "client_id" {
  value = rhoas_service_account.my-service-account.client_id
}

output "client_secret" {
  value     = rhoas_service_account.my-service-account.client_secret
  sensitive = true
}
- Save your changes.
- Apply the changes you made to your Terraform provider configuration.
$ terraform apply
In the terminal, Terraform displays a message that rhoas_service_account.my-service-account will be created.
- When you're ready to create your service account, type yes. The generated client ID appears in the terminal as an output. The client secret does not appear because it is marked as a sensitive value.
- To verify that Terraform successfully created your service account, in your web browser, open the Service Accounts page of the Streams for Apache Kafka web console.
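If you later need the secret, the standard terraform output command can print a sensitive output value explicitly (handle the result carefully, because it is a credential):
$ terraform output -raw client_secret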
After you create a service account to connect to a Kafka instance, you must also set the appropriate level of access for that new account in an Access Control List (ACL) for the Kafka instance.
- You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.
- You have created a Kafka instance.
- You understand how Access Control Lists (ACLs) enable you to manage how user accounts and service accounts can access the Kafka resources that you create. See Managing account access in OpenShift Streams for Apache Kafka.
- To create Access Control List (ACL) permissions with some default values, copy and paste the rhoas_acl resource shown in the following example into the main.tf file. This example uses the "acl" identifier.
Example rhoas_acl resource
resource "rhoas_acl" "acl" {
  kafka_id        = rhoas_kafka.my-instance.id
  resource_type   = "TOPIC"
  resource_name   = "my-topic"
  pattern_type    = "LITERAL"
  principal       = rhoas_service_account.my-service-account.client_id
  operation_type  = "ALL"
  permission_type = "ALLOW"
}
- Consider whether you need to specify your own value for any of the fields in the following list. These fields must all have values in an ACL binding resource.
resource_type
The type of resource you grant access to. This example uses "TOPIC".
resource_name
The name of the Kafka resource you grant access to. This example uses the name that is passed when creating the topic.
pattern_type
The type of pattern of the ACL. This example uses the LITERAL pattern type, meaning that Kafka tries to match the full resource name (the topic) with the resource specified in the ACL.
principal
The user or service account that this binding applies to. This example uses the service account client ID.
operation_type
The type of operation (an action performed on a resource) that is allowed for the given user on this resource.
permission_type
Whether permission is allowed or denied for the given user.
- Save your changes.
- Apply the changes you made to your Terraform provider configuration.
$ terraform apply
In the terminal, Terraform displays a message that rhoas_acl.acl will be created.
- When you're ready to set your permissions, type yes.
After you create a Kafka instance, you can create Kafka topics to start producing and consuming messages in your applications and services.
- You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.
- You have created a Kafka instance.
- To create a Kafka topic with default values, copy and paste the rhoas_topic resource shown in the following example into the main.tf file. This example uses the topic identifier and creates the my-topic Kafka topic. Because you have already created the Kafka instance, Terraform can check dependencies for this new topic resource and knows the Kafka ID when you run this example resource.
Example rhoas_topic resource with default values
resource "rhoas_topic" "topic" {
  name       = "my-topic"
  partitions = 1
  kafka_id   = rhoas_kafka.my-instance.id
}
- Save your changes.
- Apply the changes you made to your Terraform provider configuration.
$ terraform apply
In the terminal, Terraform displays a message that rhoas_topic.topic will be created.
- When you're ready to create your topic, type yes.
- To verify that Terraform successfully created your topic, in your web browser, open the Topics page of the Streams for Apache Kafka web console.
In Terraform, you can use data sources to obtain information about resources that are external to Terraform, are defined by a separate Terraform configuration, or are modified by functions. You declare data sources with a data block and add them to the configuration file in the same way that you add resources. The following rhoas_kafkas data source example provides a list of the Kafka instances accessible to your organization in OpenShift Streams for Apache Kafka.
Example rhoas_kafkas data source
terraform {
required_providers {
rhoas = {
source = "registry.terraform.io/redhat-developer/rhoas"
version = "0.4.0"
}
}
}
provider "rhoas" {}
data "rhoas_kafkas" "all" {
}
output "all_kafkas" {
value = data.rhoas_kafkas.all
}
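When you no longer need the resources that your configuration manages, you can remove all of them in one step with the standard terraform destroy command, which deletes every resource recorded in the state file after you confirm with yes:
$ terraform destroy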