
Don't start ssh-agent if it's already running #108

Open · ojab wants to merge 1 commit into master from ojab:patch-1
Conversation

@ojab commented Feb 8, 2022

This allows running the action multiple times to add multiple keys
without removing already added ones.

Test run: https://github.com/ojab/ssh-agent/runs/5110810177, both keys are added.

@ojab marked this pull request as ready for review on February 8, 2022 14:19
@ojab marked this pull request as draft on February 8, 2022 14:21
@ojab force-pushed the patch-1 branch 4 times, most recently from e0146dc to 442121e on February 8, 2022 14:37
@ojab marked this pull request as ready for review on February 8, 2022 14:38
@ojab (Author) commented Feb 8, 2022

@mpdude not sure if it's desired, but it would be a welcome addition for composite actions. Also not sure about code style; the line with the ssh-add -l check is a bit long.

@mpdude (Member) commented Feb 18, 2022

You mean that this way, a sub-action (from composite actions) could start or add keys to the running agent without knowing whether the main action started the agent or not?

@ojab (Author) commented Feb 18, 2022

Yep! And it wouldn't remove keys that were already added if ssh-agent was started before the sub-action.
So basically we no longer care whether ssh-agent is running or not; we just add the key and start ssh-agent if needed, instead of replacing the current keys with new ones.
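
For illustration, a minimal sketch of that behavior (ensureAgent is a hypothetical helper, not this PR's actual diff): ssh-add -l exits with status 2 when it cannot contact an agent, so a new agent is started only in that case.

const { spawnSync, execFileSync } = require('child_process');

// Start ssh-agent only when no agent is reachable.
// ssh-add -l exits 0 (keys listed), 1 (agent running but empty),
// or 2 (cannot connect to an agent).
function ensureAgent(sshAgentCmd, sshAddCmd) {
    const probe = spawnSync(sshAddCmd, ['-l'], { env: process.env });
    if (probe.status === 2) {
        const output = execFileSync(sshAgentCmd, []).toString();
        for (const line of output.split('\n')) {
            const match = /^(SSH_AUTH_SOCK|SSH_AGENT_PID)=(.*); export \1/.exec(line);
            if (match) process.env[match[1]] = match[2];
        }
    }
}

Keys added by earlier steps then stay in place; a later run just appends to the same agent.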

@ojab (Author) commented Feb 18, 2022

Run ./
  with:
    ssh-private-key:

Error: The ssh-private-key argument is empty. Maybe the secret has not been configured, or you are using a wrong secret name in your workflow file.

in CI, looks unrelated

@ojab (Author) commented Mar 28, 2022

😿

3 similar comments
@ojab (Author) commented May 17, 2022

😿

@ojab (Author) commented Jun 1, 2022

😿

@ojab (Author) commented Jun 27, 2022

😿

@mpdude (Member) commented Sep 1, 2022

Sorry it took me so long to get back to this.

Two thoughts:

  • Do we really need to run ssh-add -l to check? Or would it suffice to assume that when SSH_AUTH_SOCK is set, an agent has already been started and can be used? (See the sketch after this list.)
  • Do you see legitimate use cases where people intentionally start new agent instances, e.g. for a particular git clone operation, and might kill those agents later on? We might break these workflows.
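
For comparison, the SSH_AUTH_SOCK-based check from the first bullet could look roughly like this (a sketch only, not this action's code; maybeStartAgent and sshAgentCmd are hypothetical names):

const { execFileSync } = require('child_process');

// Sketch: assume an agent is usable whenever SSH_AUTH_SOCK and
// SSH_AGENT_PID are both set, and only start a fresh one otherwise.
function maybeStartAgent(sshAgentCmd) {
    if (process.env.SSH_AUTH_SOCK && process.env.SSH_AGENT_PID) {
        return; // trust the existing agent; keys will be appended to it
    }
    execFileSync(sshAgentCmd, []); // no agent known: start a new one
}

The trade-off is the one raised in the second bullet: an inherited SSH_AUTH_SOCK may point to an agent the workflow manages itself.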

@ojab (Author) commented Oct 19, 2022

Ugh, missed your comment and was reminded about this PR by a dependabot notification about the new action version, sorry.

  1. We want to run ssh-add -l because the socket could be provided via core.getInput('ssh-auth-sock'), and IMHO a separate flow for the default case (i.e. ssh-auth-sock is not set) would complicate things.
  2. It is a breaking change, but I doubt it would lead to much breakage. I can imagine three situations:
    1. the remote accepts both ssh keys and behaves differently depending on the key, e.g. we're using multiple GitHub accounts to fetch multiple repos, and each account has access to a single repo
    2. we could be adding too many keys and hit https://superuser.com/questions/268776/how-do-i-configure-ssh-so-it-doesnt-try-all-the-identity-files-automatically
    3. we would send keys that weren't sent before, and in theory a rogue ssh server could deanonymize our user, see https://github.com/FiloSottile/whoami.filippo.io

Maybe I'm missing something, but IMHO all of these concerns are theoretical rather than practical, and could be addressed by setting an explicit, distinct ssh-auth-sock.

@mpdude (Member) commented Oct 19, 2022

What about the two env variables SSH_AUTH_SOCK and SSH_AGENT_PID? These would not be set/changed when an agent is already running.

If we say that a correctly started agent requires those to be set, we could also avoid starting the agent in case both are present.

On the other hand, what if SSH_AUTH_SOCK is already present but different from what is configured in this action?

Regarding the shutdown phase: Would it be correct to terminate the agent even if we did not start it? How could we tell if that was the case?

@ojab (Author) commented Oct 19, 2022

const core = require('@actions/core');
const { spawnSync, execFileSync } = require('child_process');
// sshAdd and sshAgent are the binary paths the action resolves elsewhere.
// Probe the agent through the configured socket (falls back to the env).
const authSock = core.getInput('ssh-auth-sock');
spawnSync(sshAdd, ['-l'], { env: { SSH_AUTH_SOCK: authSock || process.env.SSH_AUTH_SOCK } });

// Only bind to an explicit socket path when one was configured.
const sshAgentArgs = (authSock && authSock.length > 0) ? ['-a', authSock] : [];
execFileSync(sshAgent, sshAgentArgs);

So:

  1. If ssh-auth-sock is provided, we're either starting or reusing that agent.
  2. If ssh-auth-sock is not provided:
    1. If env.SSH_AUTH_SOCK is set, we're either starting or reusing that agent.
    2. If env.SSH_AUTH_SOCK is not set, we're starting an agent, which picks a random socket.

And afterwards we always set SSH_AUTH_SOCK and SSH_AGENT_PID to the values provided by the agent that was just started or reused, as sketched below.
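
A hypothetical sketch of that resolution order plus the final export step (resolveAuthSock and publishAgent are illustrative names; core.getInput and core.exportVariable are part of @actions/core):

const core = require('@actions/core');

// The resolution order described above, as a single helper.
function resolveAuthSock() {
    const configured = core.getInput('ssh-auth-sock'); // case 1
    if (configured && configured.length > 0) return configured;
    return process.env.SSH_AUTH_SOCK; // case 2.1; undefined means case 2.2
}

// Afterwards, publish the agent's variables to all subsequent steps.
function publishAgent(authSock, agentPid) {
    core.exportVariable('SSH_AUTH_SOCK', authSock);
    core.exportVariable('SSH_AGENT_PID', agentPid);
}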

> Regarding the shutdown phase: Would it be correct to terminate the agent even if we did not start it? How could we tell if that was the case?

AFAICS there is no shutdown phase, so all the agents keep running until the GHA job is finished?
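
If a shutdown phase were ever added, the "did we start it?" question could be answered with the action-state API (a sketch; core.saveState and core.getState are part of @actions/core, the rest is hypothetical):

const core = require('@actions/core');
const { execFileSync } = require('child_process');

// Main step: record the PID only when this action started the agent.
//   core.saveState('agent-pid', startedPid);

// Post step: terminate the agent only if we started it ourselves.
const pid = core.getState('agent-pid'); // empty string if never saved
if (pid) {
    execFileSync('kill', [pid]);
}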

@ojab (Author) commented Feb 17, 2023

😿

mpdude added a commit that referenced this pull request Mar 24, 2023
We need to fix the SSH keys shipped with this action:
https://github.blog/2023-03-23-we-updated-our-rsa-ssh-host-key/

But we have another issue (#108) with regard to host keys: on self-hosted
runners which are not ephemeral, the known_hosts file fills up with repeated
entries, because every action run adds a new line with the same host keys.

Also, on those machines, the old key will still be in the `known_hosts`
file.

IMHO this action should not be responsible for shipping SSH host keys;
that's too much responsibility.

This section in the code is a leftover from the early days, when
GitHub-provided runners did not include SSH keys at all. For a long time
now, GH has taken care of placing their SSH keys in their runner images.

For self-hosted runners, those setting up the runner should fetch and verify
SSH host keys themselves and put them into the `known_hosts` file, as
sketched after this message.

I know this is a breaking change and is going to annoy users. But on the
other hand, there is no better opportunity to drop this feature than with
an emergency-style key revocation like today's.

Closes #106, closes #129, closes #169, closes #170, closes #172.
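
For self-hosted runners, the suggested manual setup could be as simple as this sketch (verify the fetched fingerprints against GitHub's published SSH key fingerprints before trusting them):

const { execFileSync } = require('child_process');
const fs = require('fs');
const os = require('os');

// One-time runner setup: fetch github.com host keys and pin them.
// Check the fingerprints against GitHub's published values first.
const keys = execFileSync('ssh-keyscan', ['github.com']).toString();
fs.appendFileSync(`${os.homedir()}/.ssh/known_hosts`, keys);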
@ojab (Author) commented Mar 29, 2023

😿
