Docker Autoscaler not working in AWS #1172
Issue seems …

When I manually extracted the …
I have a working configuration here (see below). Let me check tomorrow. I also have some problems with the Fleeting plugin, but no idea where to post them.
```hcl
runner_worker = {
  ssm_access          = true
  request_concurrency = 1
  type                = "docker-autoscaler"
}

runner_worker_docker_autoscaler = {
  fleeting_plugin_version = "1.0.0"
  max_use_count           = 50
}

runner_worker_docker_autoscaler_ami_owners = ["my-account-id"]

runner_worker_docker_autoscaler_ami_filter = {
  name = ["gitlab_runner_fleeting*"]
}

runner_worker_docker_autoscaler_instance = {
  root_size        = 72
  root_device_name = "/dev/xvda"
}

runner_worker_docker_autoscaler_asg = {
  subnet_ids                                = var.subnet_ids
  types                                     = var.runner_settings.worker_instance_types
  enable_mixed_instances_policy             = true
  on_demand_base_capacity                   = 1
  on_demand_percentage_above_base_capacity  = 0
  max_growth_rate                           = 10
}

runner_worker_docker_autoscaler_autoscaling_options = var.runner_settings.worker_autoscaling != null ? var.runner_settings.worker_autoscaling : [
  {
    periods      = ["* * * * *"]
    timezone     = "Europe/Berlin"
    idle_count   = 0
    idle_time    = "30m"
    scale_factor = 2
  },
  {
    periods      = ["* 7-19 * * mon-fri"]
    timezone     = "Europe/Berlin"
    idle_count   = 3
    idle_time    = "30m"
    scale_factor = 2
  }
]
```
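For anyone unsure what the AMI inputs above select: they presumably resolve the same way a plain `aws_ami` lookup would. A minimal sketch under that assumption (the data source name is illustrative, not part of the module):

```hcl
# Illustrative only: roughly how the ami_owners/ami_filter inputs above behave.
data "aws_ami" "fleeting_worker" {
  most_recent = true
  owners      = ["my-account-id"] # runner_worker_docker_autoscaler_ami_owners

  filter {
    name   = "name"
    values = ["gitlab_runner_fleeting*"] # runner_worker_docker_autoscaler_ami_filter.name
  }
}
```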
@kayman-mk
It's working like a charm here. That's strange. EDIT: Maybe it's the AMI used for the workers?
@kayman-mk Are you baking the AMIs with SSH keys? If so, that explains it.
I'm using …
I think the issue is somewhere here — terraform-aws-gitlab-runner/main.tf, lines 80 to 83 in 1d5791a: `key_path` under `[runners.autoscaler.connector_config]`.

Same logic is needed for …
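To make the suspicion concrete, here is a rough sketch of the kind of conditional that would explain it — resource names, variables, and the exact condition are assumed, not copied from the module's main.tf:

```hcl
# Hypothetical sketch only -- names and conditions are assumptions, not the module's actual code.
# Suspected behaviour: the worker SSH key pair is only created when the docker+machine
# fleet option is enabled, so docker-autoscaler workers never get a key.
resource "tls_private_key" "fleet" {
  count     = var.runner_worker_docker_machine_fleet.enable ? 1 : 0
  algorithm = "ED25519"
}

resource "aws_key_pair" "fleet" {
  count      = var.runner_worker_docker_machine_fleet.enable ? 1 : 0
  key_name   = "${var.environment}-worker"
  public_key = tls_private_key.fleet[0].public_key_openssh
}

# As a result, the rendered config.toml would lack something like:
#
#   [runners.autoscaler.connector_config]
#     username = "ec2-user"
#     key_path = "/etc/gitlab-runner/worker-key.pem"
#
# The "same logic" suggestion above would extend the condition to the
# docker-autoscaler executor as well, e.g.
#   count = (var.runner_worker_docker_machine_fleet.enable
#            || var.runner_worker.type == "docker-autoscaler") ? 1 : 0
```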
I've faced the same issue, but using an AMI with Docker preinstalled for the worker nodes helped. It seems that the fleeting plugin on the agent node connects directly to Docker on the worker nodes.
@pokidovea The worker AMI has Docker preinstalled in my case.
@chovdary123 I'm encountering the same error with a similar configuration. It looks like the …
@chovdary123, @kayman-mk, I can confirm that the public/private key variables are not set when fleet is disabled. See the following issues regarding the key settings: …
I had exactly the same problem; for me, installing ec2-instance-connect on the workers fixed it. To easily patch your code, just add this to your module inputs:

```hcl
runner_worker_docker_autoscaler_instance = {
  # whatever else you have in this input
  start_script = file("worker_start_script.sh")
}
```

and inside `worker_start_script.sh`:

```bash
#!/bin/bash
yum install ec2-instance-connect -y
```

The …
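A plausible explanation (an assumption on my part, not confirmed in this thread): the AWS fleeting plugin can push an ephemeral SSH public key to the worker through EC2 Instance Connect when no static key pair is configured, and that only works if the ec2-instance-connect package is present on the instance. If you manage the runner manager's IAM role yourself, it would then also need the matching permission; a minimal sketch:

```hcl
# Hypothetical: allow the runner manager to push SSH keys via EC2 Instance Connect.
# Resource scoping is illustrative only; restrict it to the worker ASG instances in practice.
data "aws_iam_policy_document" "instance_connect" {
  statement {
    actions   = ["ec2-instance-connect:SendSSHPublicKey"]
    resources = ["*"]
  }
}
```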
@pysiekytel, after switching the worker AMI to al2023-ami-ecs-hvm-*-kernel-6.1-x86_64, everything works as expected now! The workaround with private/public keys is no longer needed. Thanks for the suggestion!
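For reference, a minimal sketch of selecting that ECS-optimized Amazon Linux 2023 image through the module's AMI inputs (the owner alias "amazon" is an assumption; the input names are the same ones used in the configuration earlier in this thread):

```hcl
runner_worker_docker_autoscaler_ami_owners = ["amazon"]

runner_worker_docker_autoscaler_ami_filter = {
  name = ["al2023-ami-ecs-hvm-*-kernel-6.1-x86_64"]
}
```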
Server: GitLab EE v16.11.8-ee
Client: v16.10.0 (also tried v16.11.3)
Describe the bug
When using the Docker Autoscaler executor, the Runner Manager is unable to SSH into the Worker, failing with a "key not found" error.
To Reproduce
Steps to reproduce the behavior:
… (the `authorized_keys` file has the `runner-worker-key` public key added, but I believe the key pair in the runner manager, from which the SSH connection happens, is missing)

Expected behavior
The Runner Manager should connect to the Worker without errors and run the job.