Code to deploy a solution that aggregates Security Hub findings from multiple accounts into a centralized account using Amazon Kinesis Data Firehose and AWS Glue
This Terraform module creates an Amazon Kinesis Data Firehose delivery stream ready to receive Security Hub findings from EventBridge. Firehose then uses AWS Glue to retrieve the data model mapping for the Security Hub findings and writes the data to an S3 bucket in Parquet format. The module includes all the roles needed for the services to access each other.
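For orientation, the Glue-backed record format conversion at the core of this module looks roughly like the following. This is a hedged sketch, not the module's actual code: the stream name and the aws_iam_role.firehose, aws_s3_bucket.findings, and aws_glue_catalog_table.findings references are all illustrative assumptions.

# Illustrative Firehose stream converting JSON findings to Parquet via Glue.
resource "aws_kinesis_firehose_delivery_stream" "findings" {
  name        = "securityhub-findings"
  destination = "extended_s3"

  extended_s3_configuration {
    role_arn       = aws_iam_role.firehose.arn
    bucket_arn     = aws_s3_bucket.findings.arn
    buffering_size = 64 # Parquet conversion requires at least 64 MiB

    data_format_conversion_configuration {
      input_format_configuration {
        deserializer {
          open_x_json_ser_de {} # incoming findings are JSON
        }
      }
      output_format_configuration {
        serializer {
          parquet_ser_de {} # written to S3 as Parquet
        }
      }
      schema_configuration {
        role_arn      = aws_iam_role.firehose.arn
        database_name = aws_glue_catalog_table.findings.database_name
        table_name    = aws_glue_catalog_table.findings.name
      }
    }
  }
}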
This Terraform module creates an EventBridge rule that captures Security Hub findings and forwards them to the EventBridge bus created by the admin module.
The following diagram describes the full architecture.
- A Security Hub finding is created or updated
- Security Hub sends an event to the default EventBridge bus in the member account
- The EventBridge rule is subscribed to capture those findings and forward them to the dedicated EventBridge bus in the admin account (a sketch of such a rule follows this list)
- An EventBridge rule in the admin account captures any event on the dedicated bus and triggers Amazon Kinesis Data Firehose
- Firehose retrieves the data model mapping from AWS Glue to convert the raw data into Parquet format
- Firehose writes the data into the S3 bucket
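As referenced in the list above, a member-account forwarding rule can be expressed roughly as follows. This is a sketch under assumptions: the resource names, the rule name, and the events_forwarder role are illustrative, not the module's actual identifiers.

# Illustrative member-account rule matching Security Hub findings.
resource "aws_cloudwatch_event_rule" "securityhub_findings" {
  name        = "forward-securityhub-findings"
  description = "Capture Security Hub findings on the default bus"

  event_pattern = jsonencode({
    "source"      = ["aws.securityhub"]
    "detail-type" = ["Security Hub Findings - Imported"]
  })
}

# Forward matched events to the dedicated bus in the admin account.
resource "aws_cloudwatch_event_target" "admin_bus" {
  rule     = aws_cloudwatch_event_rule.securityhub_findings.name
  arn      = var.admin_events_bus_arn
  role_arn = aws_iam_role.events_forwarder.arn
}

# Role EventBridge assumes to call PutEvents on the remote bus.
resource "aws_iam_role" "events_forwarder" {
  name = "securityhub-events-forwarder"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "events.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy" "put_events" {
  role = aws_iam_role.events_forwarder.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect   = "Allow"
      Action   = "events:PutEvents"
      Resource = var.admin_events_bus_arn
    }]
  })
}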
Pay attention: both modules are meant to be used as standalone modules. They have to be deployed independently to the relevant AWS accounts.
- AWS Security Hub: this module expects that AWS Security Hub is already up and running in the region where the EventBridge rule will be deployed. The setup can easily be carried out by following the official documentation (or directly from Terraform, as sketched after this list).
- Terraform: 1.2.7
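If you manage the Security Hub prerequisite from Terraform as well, enabling it can be a one-liner. This is an assumption about your setup, not part of these modules:

# Hypothetical addition: enables Security Hub in the current account/region.
resource "aws_securityhub_account" "this" {}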
The available variables are described in the variables.tf file of each module.
If you would like to run a module standalone, you will first need to create an S3 bucket and a DynamoDB table to track the Terraform state:
cd tf-state
terraform init && terraform apply
cd ..
terraform init -backend-config=config/backend.conf
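For reference, config/backend.conf holds the partial configuration for the S3 backend. A minimal sketch, assuming placeholder names; point it at the bucket and table created in the tf-state step:

bucket         = "my-terraform-state-bucket"   # placeholder name
key            = "securityhub-reporting/terraform.tfstate"
region         = "eu-west-1"
dynamodb_table = "my-terraform-locks"          # placeholder name
encrypt        = true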
Option 1: You can use the following samples to include the modules in your own code:
module "reporting-admin-standalone" {
source = "./modules/reporting-admin-module"
name_prefix = local.name_prefix
custom_tags = local.tags
firehose_logs_retention_days = "1"
allowed_member_accounts = ["11111111","22222222"] //Add member accounts here
}
module "reporting-member-standalone" {
source = "./modules/reporting-member-single-module"
name_prefix = local.name_prefix
admin_events_bus_arn = <<ADD_HERE_THE_ARN_FOR_ADMIN_EVENT_BUS>>
}
Please have a look inside variables.tf for all the possible options.
Option 2: Alternatively, if you have Terraform installed on your workstation, you can deploy the example by executing:
export AWS_PROFILE=<profile>
export AWS_DEFAULT_REGION=eu-west-1
terraform plan -target=module.reporting-admin-module -var region=$AWS_DEFAULT_REGION -var profile=$AWS_PROFILE
terraform apply -target=module.reporting-admin-module -var region=$AWS_DEFAULT_REGION -var profile=$AWS_PROFILE
terraform plan -target=module.reporting-member-module -var region=$AWS_DEFAULT_REGION -var profile=$AWS_PROFILE
terraform apply -target=module.reporting-member-module -var region=$AWS_DEFAULT_REGION -var profile=$AWS_PROFILE
Pay attention: you should first set AWS_DEFAULT_REGION in accordance with your requirements.
The following script tests the process by sending fake findings:
source ./modules/reporting-admin-module/sh_test_event.cli
To remove all deployed resources, use with caution:
terraform destroy -var region=$AWS_DEFAULT_REGION -var profile=$AWS_PROFILE
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.