Replies: 1 comment
Hi @pkoteswar, it looks like you are using a fairly old version of the module. If you already have infrastructure deployed with the current version, I would suggest incrementally updating your module version, so that you can work through any breaking changes that have been introduced over the past 2.5 years. If this is net-new infrastructure being deployed, the suggestion would be to update to the latest version of the module before deploying the resources. Hope this helps!
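To illustrate the incremental-upgrade approach: each upgrade step is just a bump of the pinned `ref` in the module source, followed by a plan/apply cycle (the target version below is illustrative, not a specific recommendation):

```hcl
module "cloudtrail" {
  # was: ?ref=v0.44.6 — step through intermediate releases one at a time
  # (version shown here is illustrative), running `terraform plan` after
  # each bump and resolving any breaking changes before moving on.
  source = "git::[email protected]:gruntwork-io/module-security.git//modules/cloudtrail?ref=v0.45.0"

  # ... existing inputs unchanged ...
}
```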
We are currently setting up our first EKS cluster in the AWS China region (cn-north-1), reusing our existing Terragrunt modules. However, we've run into an issue while configuring the CloudTrail resources: the IAM policy ARN prefix. In the China region, IAM policies use the ARN prefix `arn:aws-cn`, while our CloudTrail module hardcodes the prefix `arn:aws`, causing conflicts.
We attempted to resolve this with the `data "aws_partition" "current" {}` approach, but unfortunately it did not yield the desired results.
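For reference, the usual pattern with that data source is to interpolate the partition into every hand-built ARN, so the same code yields `arn:aws` in commercial regions and `arn:aws-cn` in China (a sketch; the local names and policy name are illustrative):

```hcl
data "aws_partition" "current" {}

locals {
  # "aws" in commercial regions, "aws-cn" in cn-north-1 / cn-northwest-1
  partition = data.aws_partition.current.partition

  # Example of a partition-aware IAM policy ARN (policy name illustrative)
  admin_policy_arn = "arn:${local.partition}:iam::aws:policy/AdministratorAccess"
}
```

Note that this only helps for ARNs your own code constructs; if the hardcoded `arn:aws` lives inside the pinned module itself, the fix has to land in the module (or in a newer release that already uses `aws_partition`).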
To provide you with more context and details, we have attached screenshots of our Terraform/Terragrunt calling and assembly modules, along with the Terraform (TF) output.
We kindly request your assistance and guidance on how to correctly configure the ARN prefix for AWS China.
Thank you for your support.
```hcl
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
# CONFIGURE CLOUDTRAIL TO LOG EVERY API CALL IN THIS AWS ACCOUNT
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

# ---------------------------------------------------------------------------------------------------------------------
# CONFIGURE OUR AWS CONNECTION
# ---------------------------------------------------------------------------------------------------------------------

provider "aws" {
  # The AWS region in which all resources will be created
  region = "cn-north-1"

  # Provider version 2.X series is the latest, but has breaking changes with 1.X series.
  version = "~> 4.29.0"

  # Only these AWS Account IDs may be operated on by this template
  allowed_account_ids = [var.aws_account_id]
}

# ---------------------------------------------------------------------------------------------------------------------
# CONFIGURE REMOTE STATE STORAGE
# ---------------------------------------------------------------------------------------------------------------------

terraform {
  # The configuration for this backend will be filled in by Terragrunt
  backend "s3" {}

  # Only allow this Terraform version. Note that if you upgrade to a newer version, Terraform won't allow you to use an
  # older version, so when you upgrade, you should upgrade everyone on your team and your CI servers all at once.
  #required_version = "= 0.12.31"
}

# ---------------------------------------------------------------------------------------------------------------------
# ENABLE CLOUDTRAIL
# Set up a CloudTrail stream to track all API requests and store them in the specified S3 bucket.
# To understand the meaning of each property, see the vars.tf file at https://goo.gl/kRx6WN.
# ---------------------------------------------------------------------------------------------------------------------

module "cloudtrail" {
  source = "git::[email protected]:gruntwork-io/module-security.git//modules/cloudtrail?ref=v0.44.6"

  #aws_account_id                       = var.aws_account_id
  cloudtrail_trail_name                 = var.cloudtrail_trail_name
  s3_bucket_name                        = var.s3_bucket_name
  num_days_after_which_archive_log_data = var.num_days_after_which_archive_log_data
  num_days_after_which_delete_log_data  = var.num_days_after_which_delete_log_data

  # Note that users with IAM permissions to CloudTrail can still view the last 7 days of data in the AWS Web Console!
  kms_key_user_iam_arns            = var.kms_key_user_iam_arns
  kms_key_administrator_iam_arns   = var.kms_key_administrator_iam_arns
  allow_cloudtrail_access_with_iam = var.allow_cloudtrail_access_with_iam
  cloudwatch_logs_group_name       = var.cloudwatch_logs_group_name

  #additional_bucket_policy_statements = var.additional_bucket_policy_statements

  # If you're writing CloudTrail logs to an existing S3 bucket in another AWS account, set this to true
  s3_bucket_already_exists = var.s3_bucket_already_exists

  # If external AWS accounts need to write CloudTrail logs to the S3 bucket in this AWS account, provide those
  # external AWS account IDs here
  external_aws_account_ids_with_write_access = var.external_aws_account_ids_with_write_access

  force_destroy = var.force_destroy
}

data "aws_partition" "current" {}
```
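One way to make the calling module partition-aware is to build the KMS user/administrator ARNs from the `aws_partition` data source before passing them into the module, rather than hardcoding `arn:aws` in variable defaults (a sketch; the role names are illustrative):

```hcl
locals {
  partition = data.aws_partition.current.partition

  # Partition-aware IAM role ARNs (role names illustrative);
  # resolves to arn:aws-cn:... in cn-north-1 and arn:aws:... elsewhere.
  kms_admin_arns = [
    "arn:${local.partition}:iam::${var.aws_account_id}:role/kms-admin",
  ]
  kms_user_arns = [
    "arn:${local.partition}:iam::${var.aws_account_id}:role/ops",
  ]
}

# Then pass local.kms_admin_arns / local.kms_user_arns to the module inputs
# kms_key_administrator_iam_arns and kms_key_user_iam_arns.
```

This only fixes ARNs assembled in the calling code; any `arn:aws` baked into the pinned module version still requires upgrading (or patching) the module itself.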
output:
Tracked in ticket #110504