I'm using Ansible 1.5.3 and Git with ssh agent forwarding (https://help.github.com/articles/using-ssh-agent-forwarding). I can log into the server that I am managing with Ansible, but ssh agent forwarding stops working once tasks run under sudo.
There are some very helpful partial answers here, but after running into this issue a number of times, I think an overview would be helpful.
First, you need to make sure that SSH agent forwarding is enabled when connecting from your client running Ansible to the target machine. Even with transport=smart, SSH agent forwarding may not be automatically enabled, depending on your client's SSH configuration. To ensure that it is, you can update your ~/.ansible.cfg to include this section:
[ssh_connection]
ssh_args=-o ControlMaster=auto -o ControlPersist=60s -o ControlPath=/tmp/ansible-ssh-%h-%p-%r -o ForwardAgent=yes
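With that in place, a quick ad-hoc check should list your local keys on the target if forwarding works (assuming a hypothetical inventory group named webservers):

ansible webservers -m command -a 'ssh-add -l'

Note that this runs without become; forwarding under sudo is the next hurdle.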
Next, you'll likely have to deal with the fact that become: yes (and become_user: root) will generally disable agent forwarding, because the SSH_AUTH_SOCK environment variable is reset. (I find it shocking that Ansible seems to assume that people will SSH as root, since that makes any useful auditing impossible.) There are a few ways to deal with this. As of Ansible 2.2, the easiest approach is to preserve the (whole) environment when using sudo by specifying the -E flag:
become_flags: "-E"
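For context, here is a minimal sketch of how that flag might sit in a play (the host group and repository are hypothetical):

- hosts: webservers
  become: yes
  become_flags: "-E"
  tasks:
    - name: clone a private repo using the forwarded agent
      git:
        repo: git@github.com:example/private-repo.git
        dest: /opt/private-repo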
However, this can have unwanted side effects by preserving variables like PATH. The cleanest approach is to only preserve SSH_AUTH_SOCK by including it in env_keep in your /etc/sudoers file:
Defaults env_keep += "SSH_AUTH_SOCK"
To do this with Ansible:
- name: enable SSH forwarding for sudo
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'
This playbook task is a little more conservative than some of the others suggested, since it adds the line after any other default env_keep settings (or at the end of the file, if none are found), without changing any existing env_keep settings or assuming SSH_AUTH_SOCK is already present.
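If you're wary of editing /etc/sudoers in place, lineinfile also supports a validate option, which runs a command against the modified file and refuses to save it if the check fails. A sketch using visudo (the path to visudo may differ on your distribution):

- name: enable SSH forwarding for sudo (validated)
  lineinfile:
    dest: /etc/sudoers
    insertafter: '^#?\s*Defaults\s+env_keep\b'
    line: 'Defaults env_keep += "SSH_AUTH_SOCK"'
    validate: '/usr/sbin/visudo -cf %s'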
Another answer to your question (with the caveat that I am using Ansible 1.9) could be the following:
You may want to check your /etc/ansible/ansible.cfg (or the other three potential locations where config settings can be overridden) for transport=smart, as recommended in the Ansible docs. Mine had defaulted to transport=paramiko at some point during a previous install attempt, preventing my control machine from utilizing OpenSSH, and thus agent forwarding. This is probably a massive edge case, but who knows? It could be you!
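For reference, the setting lives in the [defaults] section of ansible.cfg:

[defaults]
transport = smart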
Though I didn't find it necessary for my configuration, I should note that others have mentioned that you should add -o ForwardAgent=yes to your ssh_args setting in the same file, like so:
[ssh_connection]
ssh_args=-o ForwardAgent=yes
I only mention it here for the sake of completeness.
The problem is resolved by removing this line from the playbook:
sudo: yes
When sudo is run on the remote host, the environment variables set by ssh during login are no longer available. In particular, SSH_AUTH_SOCK, which "identifies the path of a UNIX-domain socket used to communicate with the agent" is no longer visible so ssh agent forwarding does not work.
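A quick way to observe this from Ansible is to compare the variable with and without become (a sketch; printenv exits non-zero when the variable is unset, hence the failed_when on the second task):

- name: capture SSH_AUTH_SOCK without sudo
  command: printenv SSH_AUTH_SOCK
  register: sock_plain
  changed_when: false

- name: capture SSH_AUTH_SOCK under sudo
  command: printenv SSH_AUTH_SOCK
  become: yes
  register: sock_sudo
  changed_when: false
  failed_when: false

- name: compare the two
  debug:
    msg: "plain={{ sock_plain.stdout }} sudo={{ sock_sudo.stdout }}"

With a default sudoers (env_reset), the second capture comes back empty.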
Avoiding sudo when you don't need it is one way to work around the problem. Another way is to ensure that SSH_AUTH_SOCK sticks around during your sudo session by creating a sudoers file:
/etc/sudoers:
Defaults env_keep += "SSH_AUTH_SOCK"
To expand on @j.freckle's answer, the Ansible way to change the sudoers file is:
- name: Add ssh agent line to sudoers
  lineinfile:
    dest: /etc/sudoers
    state: present
    regexp: SSH_AUTH_SOCK
    line: Defaults env_keep += "SSH_AUTH_SOCK"
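Once that line is in place, a quick way to confirm the fix is to check that the agent is still reachable under sudo. A minimal sketch of such a check:

- name: verify agent forwarding survives sudo
  command: ssh-add -l
  become: yes
  changed_when: false

If forwarding works, ssh-add -l lists the keys loaded in your local agent; if it doesn't, the task typically fails with "Could not open a connection to your authentication agent."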