2015/08/12 10:42:58 terraform-provisioner-remote-exec: 2015/08/12 10:42:58 connecting to TCP connection for SSH
aws_instance.coreos.0 (remote-exec): Connecting to remote host via SSH...
aws_instance.coreos.0 (remote-exec): Host: 52.8.145.226
aws_instance.coreos.0 (remote-exec): User: core
aws_instance.coreos.0 (remote-exec): Password: false
aws_instance.coreos.0 (remote-exec): Private key: true
aws_instance.coreos.0 (remote-exec): SSH Agent: false
2015/08/12 10:42:58 terraform-provisioner-remote-exec: 2015/08/12 10:42:58 handshaking with SSH
2015/08/12 10:42:59 terraform-provisioner-remote-exec: 2015/08/12 10:42:59 handshake error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
2015/08/12 10:42:59 terraform-provisioner-remote-exec: 2015/08/12 10:42:59 Retryable error: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
I've tried every combination of specifying the key file and the SSH agent, to no avail. Connecting manually works just fine. Here's my connection block:
provisioner "remote-exec" {
inline = [
"echo hi"
]
connection {
user = "core"
key_file = "/Users/jchen/.ssh/rousseau" # with or without
agent = false # true/false
}
}
I did see #2614, but given no combination of options worked for me, I figured I'd submit an issue.
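(Note for anyone finding this later: in newer Terraform versions, key_file was deprecated in favor of private_key, which takes the key contents rather than a path. A minimal sketch of the equivalent block, using the same path as above and assuming a passphrase-free key:)

provisioner "remote-exec" {
  inline = [
    "echo hi"
  ]

  connection {
    type        = "ssh"
    user        = "core"
    # private_key expects the key's contents, not its path
    private_key = "${file("/Users/jchen/.ssh/rousseau")}"
    agent       = false
  }
}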
Same problem here, before and after updating to 0.6.3. I'm just trying to set up a simple file provisioner on AWS to copy my environment variables across:
provisioner "file" {
source = "env.sh"
destination = "/etc/profile.d/app.sh"
}
And I get the following error:
Error applying plan:
1 error(s) occurred:
* ssh: handshake failed: ssh: unable to authenticate, attempted methods [publickey none], no supported methods remain
I just tried the solutions in https://github.com/hashicorp/terraform/issues/2614 and agent = false fixed this for me.
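For anyone else landing here, this is roughly what the workaround looks like applied to the file provisioner above (a sketch only; the user and key path are placeholders for your own values):

provisioner "file" {
  source      = "env.sh"
  destination = "/etc/profile.d/app.sh"

  connection {
    type        = "ssh"
    user        = "ubuntu"                        # placeholder; use your instance's login user
    private_key = "${file("/path/to/key.pem")}"   # placeholder key path
    agent       = false                           # the workaround from #2614
  }
}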
I just got hit with this...
Shouldn't the docs at https://terraform.io/docs/provisioners/connection.html#agent state that this needs to be declared one way or the other, and document what the default is?
Hi All,
I'm also facing this issue; any suggestions?
root@gfs1:/etc/heketi# heketi-cli topology load --json=/etc/heketi/topology.json
Found node gfs1 on cluster 8f207c20e4470125f11b3a98306c98a8
Adding device /dev/vda1 ... Unable to add device: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Creating node gfs2 ... Unable to create node: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
Thanks
Same here on Terraform 0.11.5.
agent = false didn't help.
Having this issue too on Terraform 0.11.5; any help would be very much appreciated.
It seems to be present in 0.11.7 as well (for Azure); I opened a new issue: https://github.com/hashicorp/terraform/issues/18042
I played around with parameters like timeout, agent, etc., but it's still not clear what's wrong. I wish the SSH client had better logging than just throwing the "handshake" string (even with TF_LOG set to TRACE).
Same issue here. I can see it trying to connect to the bastion host and I can ssh from my terminal without issue.
provisioner "chef" {
connection {
type = "ssh"
bastion_host = "${var.bastion_host}"
bastion_user = "${var.bastion_user}"
bastion_host_key = "${var.bastion_host_key}"
user = "${var.ssh_user}"
private_key = "${var.ssh_private_key}"
agent = false
timeout = "10m"
}
...
All my problems went away the very moment I moved from europe-west1-c on GCE to a different zone, in case that helps somehow.
I actually solved this. I switched to bastion_private_key instead of bastion_host_key and got an error message stating that password-protected SSH keys were not supported. I used a different SSH key and it all started working. The funny thing is I'm still using the same password-protected key as my private_key.
Also, I'm dmcfranklin.
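In case it's useful, a sketch of roughly what the working connection block looks like after that change (var.bastion_key_path is a placeholder variable name; per the error above, the bastion key must not be passphrase-protected):

connection {
  type                = "ssh"
  bastion_host        = "${var.bastion_host}"
  bastion_user        = "${var.bastion_user}"
  bastion_private_key = "${file(var.bastion_key_path)}" # placeholder var; key must not have a passphrase
  user                = "${var.ssh_user}"
  private_key         = "${var.ssh_private_key}"
  agent               = false
  timeout             = "10m"
}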
I had the same error. I managed to fix it by changing the permissions of my public key to 644 (chmod 644).
I think it's because the agent cannot read the public key when its permissions are 600, even though the agent is running as me.
Hello! :robot:
This issue relates to an older version of Terraform that is no longer in active development, and because the area of Terraform it relates to has changed significantly since the issue was opened we suspect that the issue is either fixed or that the circumstances around it have changed enough that we'd need an updated issue report in order to reproduce and address it.
If you're still seeing this or a similar issue in the latest version of Terraform, please do feel free to open a new bug report! Please be sure to include all of the information requested in the template, even if it might seem redundant with the information already shared in _this_ issue, because the internal details relating to this problem are likely to be different in the current version of Terraform.
Thanks!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.