Automating deployment and management of Kafka instances in OpenShift Streams for Apache Kafka

As a system administrator of applications and services, you can use Infrastructure as Code (IaC) tools to automate deployment and management of architectures.

This guide describes how to use different IaC tools to provision and manage Kafka instances in OpenShift Streams for Apache Kafka. To interact with Kafka instances, the IaC tools use two REST APIs that are available in Streams for Apache Kafka - a management API and an instance API.

For more information about creating and setting up Kafka instances, see Getting started with OpenShift Streams for Apache Kafka.

Ansible

Ansible is an open source configuration management and automation tool.

Ansible uses modules to execute tasks. The output of one module can be the input for the next, or modules can run independently of each other.

You can combine one or more Ansible modules to make a play. Combine two or more plays to create an Ansible playbook, a list of sequential tasks (written in YAML) that automatically executes when you run the ansible-playbook command. You can use an Ansible playbook to describe the desired state of your infrastructure and then have Ansible provision it.
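
For example, the following minimal playbook contains a single play with one task. This is a sketch that uses the built-in ansible.builtin.debug module and runs against the local host:

Example minimal playbook
- name: Example play
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Print a message
      ansible.builtin.debug:
        msg: "Hello from Ansible"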

An Ansible role is a special kind of playbook that is fully self-contained and portable with the tasks, variables, configuration templates, and other supporting files that are needed to complete your chosen tasks. Multiple roles can exist inside a collection, allowing easy sharing of content through Automation Hub and Ansible Galaxy.

Installing the OpenShift Application Services Ansible collection

The following procedure shows how to install the OpenShift Application Services (RHOAS) Ansible collection, which allows you to fully manage your Kafka environment with Ansible.

Prerequisites
  • Python 3.9 or later is installed.

  • Ansible 2.9 or later is installed.

  • Some additional dependencies required by this collection are installed. To install these, enter the following commands:

    $ pip install rhoas-sdks
    $ pip install python-dotenv
    Note
    Use the pip install rhoas-sdks --force-reinstall command to ensure that the most up-to-date versions of the SDKs are installed.
Procedure
  1. Open a new terminal window.

  2. Create a Python virtual environment.

    $ python3 -m venv rhoas
  3. Activate the virtual environment.

    $ . rhoas/bin/activate
  4. Install the RHOAS Ansible collection.

    $ ansible-galaxy collection install rhoas.rhoas

    By default, the ansible-galaxy collection install command uses https://galaxy.ansible.com as the Galaxy server URL. The URL is specified as the value of the GALAXY_SERVER parameter in the ansible.cfg configuration file.

    A message states that the RHOAS collection was installed successfully to the following path: /home/<user>/.ansible/collections/ansible_collections/rhoas/rhoas. All modules for the collection can be found in this location.

    When the collection is installed, you can reference its content by the fully qualified collection name (FQCN). The module name in a task needs to include the FQCN so that it always points to the intended collection. For example, the FQCN for the create_kafka module is rhoas.rhoas.create_kafka, as shown in the sketch that follows.
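
    For example, a task that calls the create_kafka module by its FQCN has the following shape (a sketch with the module arguments omitted; later sections show complete examples):

    Example task using the FQCN
    - name: Create a Kafka instance
      rhoas.rhoas.create_kafka:
        # Module arguments go here.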

    Note

    To find documentation for each module, open a terminal window and enter ansible-doc rhoas.rhoas.<module_name>, replacing <module_name> with the name of the module. The following example outputs the documentation for the create_kafka module. The documentation shows required parameters (denoted by =) and optional parameters (denoted by -) for the module, as well as examples of how to use the module and its output. If you do not pass in any values for optional fields, Ansible uses the default values from the Kafka APIs.

    $ ansible-doc rhoas.rhoas.create_kafka

Using the OpenShift Application Services Ansible collection

You can create, view, configure, and delete the following OpenShift Streams for Apache Kafka resources with the OpenShift Application Services (RHOAS) collection:

  • Kafka instances

  • Service accounts

  • Kafka topics

  • Access Control Lists (ACLs)

Prerequisites
  • You have a Red Hat account.

  • You have an offline token used to authenticate the Ansible modules with the OpenShift Application Services API.

Note
The offline token is a refresh token with no expiry and can be used by non-interactive processes to provide an access token for OpenShift Application Services to authenticate the user. The token is an OpenShift offline token and you can find it at https://cloud.redhat.com/openshift/token.
Procedure
  1. In your IDE, create a new environment variables (.env) file.

  2. In the .env file, add the following variables required by the Ansible collection:

    API_BASE_HOST

    This is the base URL for the API. For example, https://api.openshift.com.

    SSO_BASE_HOST

    This is the base URL for the Red Hat single sign-on (SSO) service. For example, https://sso.redhat.com/auth/realms/redhat-external.
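
    For example, a .env file that sets both variables to these default endpoints looks like this:

    Example .env file
    API_BASE_HOST=https://api.openshift.com
    SSO_BASE_HOST=https://sso.redhat.com/auth/realms/redhat-external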

  3. Save the file in the root directory of the Ansible collection.

    Note
    If you do not explicitly define the environment variables, the collection uses the URLs shown in the preceding step.
  4. You can run a module in a terminal window to perform a single task. For example, the following command shows how to run a module that creates a Kafka instance. Replace <OFFLINE_TOKEN> with your OpenShift offline token.

    Example create_kafka module
    $ ansible localhost -m rhoas.rhoas.create_kafka -a 'name=unique-kafka-name billing_model=standard cloud_provider=aws plan="developer.x1" region="us-east-1" openshift_offline_token=<OFFLINE_TOKEN>'

    Ansible runs the rhoas.rhoas.create_kafka task in the terminal and creates the instance.

    You can also create a playbook of sequential tasks and use Ansible to automatically execute those tasks. The steps in the next section show you how to create and run an example playbook.

Creating a playbook

The example playbook in this section includes modules for creating a Kafka instance, topic, and service account, and for assigning permissions to the service account. You also configure modules to delete the resources that you previously created.

  1. To open the example playbook used in this section, use your IDE to navigate to the rhoas folder in the .ansible directory.

  2. Open the rhoas_test example playbook.

    Note

    You can also open a web browser to view the rhoas_test playbook.

  3. Inspect the contents of the example playbook. In particular, observe that the playbook has modules for certain tasks. You will use these modules to create a new playbook that performs the following tasks:

    • Creating and deleting a Kafka instance

    • Creating and deleting a service account

    • Creating Access Control List (ACL) permission bindings

    • Creating, updating, and deleting a topic

      Note
      The playbook uses your offline token to authenticate with the Kafka Management API. If you do not specify the token as an argument for a given task, the module attempts to read it from the OFFLINE_TOKEN environment variable, as shown in the export example after these notes.
      Note

      The example playbook used in this section includes comments that indicate how to directly specify values rather than fetching them dynamically. For example, to specify a Kafka instance ID, a comment in the playbook states that you can include the following line:

      kafka_id: <kafka_id>
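
      For example, to provide the offline token through the environment rather than as a task argument, export it in your shell before you run the playbook, replacing <offline_token> with your token value:

      $ export OFFLINE_TOKEN=<offline_token>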
  4. In your IDE, create a new playbook file and save it as rhoas_kafka.yml in the rhoas directory.

  5. At the start of the new playbook, you define the name of the playbook and the group of hosts on which to run it. Copy the following example and paste it into the rhoas_kafka.yml file. In this example, the host is a local host and the connection is local because you are in a virtual environment.

    Example first section for rhoas_kafka playbook
    - name: RHOAS kafka
      hosts: localhost
      gather_facts: false
      connection: local
      tasks:

Creating a Kafka instance

To manage a Kafka instance using Ansible, you can use the create_kafka module and run it as part of a playbook or as a single module.

Prerequisites
  • You have a Red Hat account.

  • You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.

Procedure
  1. To add a module that creates a new Kafka instance to a playbook, copy the create_kafka module shown in the following example and paste it into the tasks: section of your rhoas_kafka.yml file.

    Example create_kafka module
    - name: Create kafka
      rhoas.rhoas.create_kafka:
        name: "kafka-name"
        instance_type: "x1"
        billing_model: "standard"
        cloud_provider: "aws"
        region: "us-east-1"
        plan: "developer.x1"
        billing_cloud_account_id: "123456789"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
      register: kafka_req_resp
  2. In the name field, enter a name for the Kafka instance.

  3. In the billing_cloud_account_id field, enter the billing cloud account ID.

  4. In the openshift_offline_token field, enter your OpenShift offline token.

    All other information for the instance is provided by the Kafka APIs.

    When you run the create_kafka module as part of the playbook at the end of these steps, Ansible saves the output of that module in the variable named in the register field. In the preceding example, Ansible saves the response for the created Kafka instance as kafka_req_resp.
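
    While developing the playbook, you can optionally inspect the registered output by adding a task that prints the variable. The following sketch uses the built-in ansible.builtin.debug module:

    Example debug task
    - name: Show the create_kafka response
      ansible.builtin.debug:
        var: kafka_req_resp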

Creating a service account

To connect your applications or services to a Kafka instance in Streams for Apache Kafka, you must first create a service account with credentials.

Prerequisites
  • You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.

  • You have created a Kafka instance.

Procedure
  1. To add a module that creates a service account, copy the create_service_account module shown in the following example and paste it into the rhoas_kafka.yml file.

    Example create_service_account module
    - name: Create Service Account
      rhoas.rhoas.create_service_account:
        name: "service-account-name"
        description: "This is a description of the service account"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
      register: srvce_acc_resp_obj
  2. Enter values for the name, description, and openshift_offline_token fields.

    When you run the create_service_account module as part of the playbook at the end of these steps, Ansible outputs the generated service account credentials as the client_id and client_secret fields in the terminal.

Creating Access Control List permissions

After you create a service account to connect to a Kafka instance, you must also set the appropriate level of access for that new account in an Access Control List (ACL) for the Kafka instance.

Prerequisites
  • You have an offline token that authenticates the Ansible modules with the OpenShift Application Services API.

  • You have created a Kafka instance.

  • You understand how Access Control Lists (ACLs) enable you to manage how user accounts and service accounts can access the Kafka resources that you create. See Managing account access in OpenShift Streams for Apache Kafka.

Procedure
  1. To create Access Control List (ACL) permissions for the service account and bind that ACL to the Kafka instance, copy the create_kafka_acl_binding module shown in the following example and paste it in your rhoas_kafka.yml file.

    Example create_kafka_acl_binding module
    - name: Create kafka ACL Service Binding
      rhoas.rhoas.create_kafka_acl_binding:
        kafka_id: "{{ kafka_req_resp.id }}"
        # To hard code the kafka_id, uncomment and use the following line:
        # kafka_id: "KAFKA_ID"
        principal: "{{ srvce_acc_resp_obj['client_id'] }}"
        # To hard code the principal, uncomment and use the following line:
        # principal: "PRINCIPAL_ID"
        resource_name: "topic-name"
        resource_type: "Topic"
        pattern_type: "PREFIXED"
        operation_type: "all"
        permission_type: "allow"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
      register: kafka_acl_resp
  2. To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.

  3. In the openshift_offline_token field, enter your OpenShift offline token.

  4. Consider whether you need to specify your own value for any of the fields in the following list. These fields must all have values in an ACL binding module.

    principal

    The user or service account that this binding applies to. This example uses the service account client ID.

    resource_name

    The Kafka resource that you are granting access to. This example specifies the Kafka topic that you create in the playbook.

    resource_type

    The type of resource you grant access to. This example uses Topic.

    pattern_type

    The pattern type of the ACL. This example uses the PREFIXED pattern type, meaning that the ACL applies to any resource whose name begins with the value of resource_name (see the example after this list).

    operation_type

    The type of operation (an action performed on a resource) that is allowed for the given user on this resource.

    permission_type

    Whether permission is given to the user or taken away.
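
    For example, with the PREFIXED pattern type and a resource_name of topic-name, the ACL applies to any topic whose name begins with topic-name, such as topic-name-orders or topic-name-payments.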

Configuring a Kafka topic

After you create a Kafka instance, you can create Kafka topics to start producing and consuming messages in your applications and services.

Prerequisites
  • You have an offline token that authenticates the Ansible modules with the Kafka Management API.

  • You have created a Kafka instance.

Procedure
  1. To create a Kafka topic, copy the contents of the create_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file.

    Example create_kafka_topic module
    - name: Create Kafka Topic
      rhoas.rhoas.create_kafka_topic:
        topic_name: "kafka-topic-name"
        kafka_id: "{{ kafka_req_resp.id }}"
        # To hard code the kafka_id, uncomment and use the following line:
        # kafka_id: "KAFKA_ID"
        partitions: 1
        retention_period_ms: "86400000"
        retention_size_bytes: "1073741824"
        cleanup_policy: "compact"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
      register: create_topic_res_obj
  2. To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.

  3. In the openshift_offline_token field, enter your OpenShift offline token.

  4. To update the configuration of the topic, copy the update_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file. In the following example, the cleanup policy has been updated from compact to delete by replacing "compact" with "delete" in the cleanup_policy field.

    Example update_kafka_topic module
    - name: Update Kafka Topic
      rhoas.rhoas.update_kafka_topic:
        topic_name: "kafka-topic-name"
        kafka_id: "{{ kafka_req_resp.id }}"
        # To hard code the kafka_id, uncomment and use the following line:
        # kafka_id: "KAFKA_ID"
        partitions: 1
        retention_period_ms: "86400000"
        retention_size_bytes: "1073741824"
        cleanup_policy: "delete"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
      register: update_topic_res_obj
  5. To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.

  6. (Optional) You can modify the values in the retention_period_ms and retention_size_bytes fields instead of accepting the default values.
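
    In these examples, the retention_period_ms value of 86400000 milliseconds corresponds to a retention period of one day, and the retention_size_bytes value of 1073741824 bytes corresponds to 1 GiB.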

Deleting a Kafka topic

To delete a topic, use the delete_kafka_topic module.

  1. Copy the delete_kafka_topic module shown in the following example and paste it into the rhoas_kafka.yml file.

    Example delete_kafka_topic module
    - name: Delete Kafka Topic
      rhoas.rhoas.delete_kafka_topic:
        topic_name: "KAFKA_TOPIC_NAME"
        kafka_id: "{{ kafka_req_resp.id }}"
        # To hard code the kafka_id, uncomment and use the following line:
        # kafka_id: "KAFKA_ID"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
  2. In the openshift_offline_token field, enter your OpenShift offline token.

  3. To directly specify a Kafka instance ID, enter a value in the kafka_id field. Otherwise, Ansible gets the Kafka ID from the kafka_req_resp.id variable.

Deleting a service account

To delete a service account, use the delete_service_account_by_id module.

Prerequisites
  • You have created a service account.

Procedure
  1. To delete the service account, copy the delete_service_account_by_id module shown in the following example and paste it into the rhoas_kafka.yml file.

  2. To directly specify a service account ID, enter a value in the service_account_id field. Otherwise, Ansible gets the service account ID from the srvce_acc_resp_obj variable.

    Example delete_service_account_by_id module
    - name: Delete Service Account
      rhoas.rhoas.delete_service_account_by_id:
        service_account_id: "{{ srvce_acc_resp_obj['client_id'] }}"
        # To hard code the service_account_id, uncomment and use the following line:
        # service_account_id: "SERVICE_ACCOUNT_ID"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"

Deleting a Kafka instance

To delete a Kafka instance, use the delete_kafka_by_id module.

Prerequisites
  • You have created a Kafka instance.

Procedure
  1. To deprovision and delete the Streams for Apache Kafka instance, copy the delete_kafka_by_id module shown in the following example and paste it into the rhoas_kafka.yml file.

    Example delete_kafka_by_id module
    - name: Delete kafka instance by ID
      rhoas.rhoas.delete_kafka_by_id:
        kafka_id: "{{ kafka_req_resp.id }}"
        openshift_offline_token: "OPENSHIFT_OFFLINE_TOKEN"
  2. In the openshift_offline_token field, enter your OpenShift offline token.

Running the playbook

When you have finished creating the resources, run the playbook.

Procedure
  1. Save any changes you have made to the rhoas_kafka.yml playbook.

  2. Open a terminal and enter the following command to run the playbook:

    $ ansible-playbook rhoas_kafka.yml

    The playbook runs through the tasks sequentially, generating output in the terminal window. When finished, Ansible displays a PLAY RECAP message stating that all 8 tasks have an ok status, meaning they have all run successfully.
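
    The recap looks similar to the following. The exact counts of changed tasks depend on your environment:

    PLAY RECAP *********************************************************
    localhost : ok=8    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0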

Terraform

HashiCorp Terraform is an Infrastructure as Code (IaC) tool that enables you to build and change infrastructure safely and efficiently through human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to provision and manage all of your infrastructure throughout its lifecycle.

Installing the OpenShift Application Services Terraform provider

You can fully manage your Kafka environment through your Terraform system using the OpenShift Application Services (RHOAS) Terraform provider, which is available in the official Terraform provider registry.

You can create, view, configure, and delete the following Streams for Apache Kafka resources:

  • Kafka instances

  • Service accounts

  • Kafka topics

  • Access Control Lists (ACLs)

Prerequisites
  • You have a Red Hat account.

  • Terraform v1.3.4 or later is installed.

Procedure
  1. In your web browser, navigate to the OpenShift Application Services (RHOAS) Terraform provider.

  2. In the upper-right corner of the RHOAS Terraform provider registry, click Use Provider.

    A pane opens that shows the configuration you need to use the RHOAS Terraform provider.

  3. In the pane that opened, copy the configuration shown. The following lines show an example of the configuration.

    Example RHOAS provider configuration
    terraform {
      required_providers {
        rhoas = {
          source = "redhat-developer/rhoas"
          version = "0.4.0"
        }
      }
    }
    
    provider "rhoas" {
      #configuration options
    }
  4. In your IDE, open a new file and paste the configuration you copied. You can specify configuration options in the provider section.

  5. Save the file as a Terraform configuration file called main.tf in a directory of your choice. This procedure uses a directory called Terraform.

  6. Open a terminal and navigate to the Terraform directory.

    $ cd Terraform
  7. Enter the following command. This command initializes the working directory that contains Terraform configuration files and installs any required plug-ins.

    $ terraform init

    When the Terraform provider has been initialized, you see a confirmation message.
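
    The confirmation looks like this:

    Terraform has been successfully initialized!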

Using the OpenShift Application Services Terraform provider

Resources are the most important element in the Terraform language. Each resource block in a Terraform provider describes one or more infrastructure objects. For OpenShift Streams for Apache Kafka, such infrastructure objects might include Kafka instances, service accounts, Access Control Lists (ACLs), and topics. The procedures that follow show what resources you can add to your Terraform configuration file to create a Kafka instance and its associated resources such as service accounts and topics.

Creating a Kafka instance

To manage a Kafka instance using Terraform, use the rhoas_kafka resource.

Prerequisites
  • You have a Red Hat account.

  • You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.

Note

The offline token is a refresh token with no expiry and can be used by non-interactive processes to provide an access token for OpenShift Application Services to authenticate the user. The token is an OpenShift offline token and you can find it at https://cloud.redhat.com/openshift/token. Because the offline token is a sensitive value that varies between environments, it is best specified as an OFFLINE_TOKEN environment variable when running terraform apply in a terminal. To set this environment variable, enter the following command in a terminal window, replacing <offline_token> with the value of the offline token:

export OFFLINE_TOKEN=<offline_token>
Procedure
  1. Open the main.tf file in your IDE for editing.

  2. Copy the rhoas_kafka resource shown in the following example and paste it into the main.tf file after the provider configuration. This example uses the "my-instance" identifier and creates a Kafka instance called my-instance.

    Note
    In the following examples, the identifier and the name of the infrastructure object are the same for demonstration purposes only. You can choose different values for each field.
    Example rhoas_kafka resource
    resource "rhoas_kafka" "my-instance" {
      name = "my-instance"
      plan = "developer.x1"
      billing_model = "standard"
    }
      output "bootstrap_server_my-instance" {
        value = rhoas_kafka.my-instance.bootstrap_server_host
    }
  3. Save your changes.

  4. Open a terminal and apply the changes you made to your Terraform provider configuration.

    $ terraform apply

    In the terminal, Terraform displays a message that rhoas_kafka.my-instance will be created. Terraform automatically sets values for cloud provider and region in the terminal. All other information for the instance is provided by the Kafka APIs.

  5. When you’re ready to create your instance, type yes. The generated bootstrap server URL appears in the terminal as an output.

    Note
    Running terraform apply for the first time also creates the Terraform state file. Terraform logs information about the resources it has created in this state file. This allows Terraform to know which resources are under its control and when to update and delete them. The Terraform state file is named terraform.tfstate by default and is kept in the same directory where Terraform is run. Sensitive information such as the offline token, client ID, and client secret can be found in the terraform.tfstate file. Running terraform apply again updates this file.
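
    For example, you can list the resources that Terraform currently tracks in the state file by using the standard state command:

    $ terraform state list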
  6. To verify Terraform successfully created your Kafka instance, in your web browser, open the Kafka Instances page of the Streams for Apache Kafka web console.

Creating a service account

To connect your applications or services to a Kafka instance in Streams for Apache Kafka, you must first create a service account with credentials.

Prerequisites
  • You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.

  • You have created a Kafka instance.

Procedure
  1. To create a service account, copy and paste the rhoas_service_account resource shown in the following example into the main.tf file. This example uses the "my-service-account" identifier and creates a service account called my-service-account.

    Example rhoas_service_account resource
    resource "rhoas_service_account" "my-service-account" {
      name        = "my-service-account"
      description = "<description of service account>"
    }
    
    output "client_id" {
      value = rhoas_service_account.my-service-account.client_id
    }
    
    output "client_secret" {
      value     = rhoas_service_account.my-service-account.client_secret
      sensitive = true
    }
  2. Save your changes.

  3. Apply the changes you made to your Terraform provider configuration.

    $ terraform apply

    In the terminal, Terraform displays a message that rhoas_service_account.my-service-account will be created.

  4. When you’re ready to create your service account, type yes. The generated client ID appears in the terminal as an output. The client secret does not appear because it is marked as a sensitive value.
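
    If you need to read the client secret later, you can print it with the standard terraform output command. Handle the value carefully, because it is a credential:

    $ terraform output -raw client_secret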

  5. To verify Terraform successfully created your service account, in your web browser, open the Service Accounts page of the Streams for Apache Kafka web console.

Creating Access Control List permissions

After you create a service account to connect to a Kafka instance, you must also set the appropriate level of access for that new account in an Access Control List (ACL) for the Kafka instance.

Prerequisites
  • You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.

  • You have created a Kafka instance.

  • You understand how Access Control Lists (ACLs) enable you to manage how user accounts and service accounts can access the Kafka resources that you create. See Managing account access in OpenShift Streams for Apache Kafka.

Procedure
  1. To create Access Control List (ACL) permissions with some default values, copy and paste the rhoas_acl resource shown in the following example into the main.tf file. This example uses the "acl" identifier.

    Example ACL binding resource
    resource "rhoas_acl" "acl" {
      kafka_id = rhoas_kafka.my-instance.id
      resource_type = "TOPIC"
      resource_name = "my-topic"
      pattern_type = "LITERAL"
      principal = rhoas_service_account.my-service-account.client_id
      operation_type = "ALL"
      permission_type = "ALLOW"
    }
  2. Consider whether you need to specify your own value for any of the fields in the following list. These fields must all have values in an ACL binding resource.

    resource_type

    The type of resource you grant access to. This example uses "TOPIC".

    resource_name

    The name of the Kafka resource you grant access to. This example uses the name that is passed when creating the topic.

    pattern_type

    The pattern type of the ACL. This example uses the LITERAL pattern type, meaning that Kafka will try to match the full resource name (the topic) with the resource specified in the ACL.

    principal

    The user or service account that this binding applies to. This example uses the service account client ID.

    operation_type

    The type of operation (an action performed on a resource) that is allowed for the given user on this resource.

    permission_type

    Whether permission is given or taken away.

  3. Save your changes.

  4. Apply the changes you made to your Terraform provider configuration.

    $ terraform apply

    In the terminal, Terraform displays a message that rhoas_acl.acl will be created.

  5. When you’re ready to set your permissions, type yes.

Creating a Kafka topic

After you create a Kafka instance, you can create Kafka topics to start producing and consuming messages in your applications and services.

Prerequisites
  • You have an offline token that authenticates the Terraform resources with the OpenShift Application Services API.

  • You have created a Kafka instance.

Procedure
  1. To create a Kafka topic with default values, copy and paste the rhoas_topic resource shown in the following example into the main.tf file. This example uses the topic identifier and creates the my-topic Kafka topic. Because you have already created the Kafka instance, Terraform can check dependencies for this new topic resource and knows the Kafka ID when you run this example resource.

    Example rhoas_topic resource with default values
    resource "rhoas_topic" "topic" {
    		name = "my-topic"
    		partitions = 1
    		kafka_id = rhoas_kafka.instance.id
    	}
  2. Save your changes.

  3. Apply the changes you made to your Terraform provider configuration.

    $ terraform apply

    In the terminal, Terraform displays a message that rhoas_topic.topic will be created.

  4. When you’re ready to create your topic, type yes.

  5. To verify Terraform successfully created your topic, in your web browser, open the Topics page of the Streams for Apache Kafka web console.

Deleting resources

You can delete the resources you have created when you have finished using them.

Prerequisites
  • You have created Terraform resources.

Procedure
  • To delete the created resources, enter the following command:

    $ terraform destroy

Data sources

In Terraform, you can use a data block to define data sources, which obtain information about resources that are external to Terraform, defined in a separate Terraform configuration, or modified by functions. You add data sources to the configuration file in the same way that you add resources. The following rhoas_kafkas data source example returns a list of the Kafka instances accessible to your organization in OpenShift Streams for Apache Kafka.

Example rhoas_kafkas data source
terraform {
  required_providers {
    rhoas = {
      source  = "registry.terraform.io/redhat-developer/rhoas"
      version = "0.4.0"
    }
  }
}

provider "rhoas" {}

data "rhoas_kafkas" "all" {
}

output "all_kafkas" {
  value = data.rhoas_kafkas.all
}
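
After you apply this configuration, you can inspect the returned list in the terminal. For example:

$ terraform apply
$ terraform output all_kafkas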