Failed to query available provider packages and connect to RDS on terraform cloud #22
Can you try changing [...] to [...]? Terraform's provider naming is a bit annoying when it comes to hyphens.
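For context, a minimal sketch of the kind of `required_providers` block this usually comes down to; the namespace and version constraint below are placeholders, not the provider's real source address:

```hcl
terraform {
  required_providers {
    awsssmtunnels = {
      // Placeholder namespace/version: the source must match the registry
      // listing exactly, including any hyphens in the provider name.
      source  = "example-namespace/awsssmtunnels"
      version = ">= 0.1.0"
    }
  }
}
```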
@fabiobsantosprogrow glad that moved you a step further. The issue you are hitting might be happening if Terraform thinks it is done with the provider and shuts it down while the tunnel is still needed. Do you have this in your TF code?

```hcl
resource "awsssmtunnels_remote_tunnel" "rds" {
  ...
}

// NOTE: The import is needed for the first plan, otherwise TF will hold off
// on the create until the Apply phase. The import allows the tunnel to come
// up during the very first plan.
import {
  id = "<target>|<remote host>|<remote port>|<local port>|127.0.0.1|<region>"
  to = awsssmtunnels_remote_tunnel.rds
}

// Keepalive prevents TF from prematurely sending a shutdown command to the
// provider process (which would kill the tunnel).
data "awsssmtunnels_keepalive" "rds" {
  depends_on = [
    awsssmtunnels_remote_tunnel.rds,
    // ... every MySQL resource
  ]
}

// Add depends_on so that TF doesn't do anything with these resources until
// the tunnel is up.
resource "mysql_..." "..." {
  ...
  depends_on = [awsssmtunnels_remote_tunnel.rds]
}
```

Let me know if that helps, or if you continue to run into issues.
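For illustration, a sketch of what one of those stubbed MySQL resources might look like once filled in; the `mysql_user` resource type is from the community MySQL provider, and the names and variable here are placeholders:

```hcl
resource "mysql_user" "app" {
  user               = "app" // Placeholder user name
  host               = "%"
  plaintext_password = var.app_db_password // Placeholder variable

  // Keep TF from touching this resource until the tunnel is up
  depends_on = [awsssmtunnels_remote_tunnel.rds]
}
```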
@fabiobsantosprogrow do you have something like this in your TF code?
Would it be possible for you to share a small TF code example that exhibits this problem (with stubbed names/settings)? That would help me understand how you have it set up. Also, are you opening multiple SSM tunnels?
I did some debugging with the multi-tunnel scenario and found that, for reliability, one should have a provider per tunnel: the AWS session-manager-plugin issues os.Exit, which kills the provider process even though other tunnels are still open.

```hcl
provider "awsssmtunnels" {
  alias               = "rds" // NOTE: The alias here
  region              = "us-east-1"
  shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]
}

resource "awsssmtunnels_remote_tunnel" "rds" {
  provider    = awsssmtunnels.rds // NOTE: The provider alias reference here
  refresh_id  = "one"
  target      = var.bastion_instance_id
  remote_host = var.endpoint
  remote_port = 5432
  region      = "us-east-1"
  local_port  = 16534
}

import {
  id = "${var.bastion_instance_id}|${var.endpoint}|5432|16534|127.0.0.1|us-east-1"
  to = awsssmtunnels_remote_tunnel.rds
}

data "awsssmtunnels_keepalive" "rds" {
  provider = awsssmtunnels.rds // The provider alias reference here
  depends_on = [
    // ...
    awsssmtunnels_remote_tunnel.rds,
  ]
}
```

You can then have another aliased provider for the next tunnel, for example:
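(A sketch mirroring the first tunnel; the `app` alias, `var.app_endpoint`, and the ports are placeholders.)

```hcl
provider "awsssmtunnels" {
  alias               = "app" // A second alias, one per tunnel
  region              = "us-east-1"
  shared_config_files = [var.tfc_aws_dynamic_credentials.default.shared_config_file]
}

resource "awsssmtunnels_remote_tunnel" "app" {
  provider    = awsssmtunnels.app // Reference the second alias
  refresh_id  = "one"
  target      = var.bastion_instance_id
  remote_host = var.app_endpoint
  remote_port = 3306
  region      = "us-east-1"
  local_port  = 16535
}
```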
Running multiple aliased providers this way spins up separate processes, one per provider, and that isolation seems to help things run more reliably.
@fabiobsantosprogrow you raise a good question about deletion of a resource (such as a user). I'm not sure that this provider can easily support that; I'll have to do some experimenting. I did want to highlight this issue in Terraform (hashicorp/terraform#8367), which, if implemented, could make this provider simpler to configure and make resource deletion support work properly.
When I use the provider example I get this error: [...]

The configuration used for this test: [...]

When I change it to: [...]

Result: [...]

When I try to create awsssmtunnels_remote_tunnel using this example, the same error happens again.

Configuration: [...]

What is the problem with this configuration?