I am trying to implement a custom Vault HTTP client in Go. I've successfully managed to unseal the Vault. However, when I try to enable AppRole I get the following error message in the HTTP response:
{"errors":["node not active but active node not found"]}
The related snippet of code is:
appRoleUrl := BASE_VAULT_URL + "sys/auth/approle"

// Enable the AppRole auth backend via sys/auth/approle.
_request := gorequest.New()
if _, body, errs := _request.Post(appRoleUrl).
	Set("X-Vault-Token", rootToken).
	Send(`{"type":"approle"}`).End(); hasErrs(errs) {
	return errors.New(stringifyErrs(errs))
} else {
	fmt.Println("enableAppRoleWithPolicy", body) // <== Here I get the error
	return createACLPolicy(rootToken)
}
The URL and the root token are correct! What am I missing?
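For reference, Vault's unauthenticated sys/leader endpoint reports the HA state that this error refers to. Here is a minimal sketch using only the standard library, assuming Vault is listening on 127.0.0.1:8200 with TLS disabled:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
)

func main() {
	// sys/leader is unauthenticated and reports HA status, e.g.
	// {"ha_enabled":true,"is_self":false,"leader_address":""}.
	// An empty leader_address matches "active node not found".
	resp, err := http.Get("http://127.0.0.1:8200/v1/sys/leader")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}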
It seems like there is an issue with your HA setup. Could you share your Vault configuration?
@briankassouf Thank you for the reply. I have used the Vault configuration mentioned on the Deploy Vault page of the Vault website.
backend "consul" {
address = "127.0.0.1:8500"
path = "vault"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
@briankassouf I tried the same with the official Go client for Vault, but I got the same error! :(
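For completeness, this is roughly the shape of that call with the official client; a sketch assuming the github.com/hashicorp/vault/api package and a VAULT_TOKEN environment variable holding the root token:

package main

import (
	"log"
	"os"

	"github.com/hashicorp/vault/api"
)

func main() {
	// DefaultConfig honors VAULT_ADDR; override the address for the
	// local, TLS-disabled listener.
	config := api.DefaultConfig()
	config.Address = "http://127.0.0.1:8200"

	client, err := api.NewClient(config)
	if err != nil {
		log.Fatal(err)
	}
	client.SetToken(os.Getenv("VAULT_TOKEN")) // the root token

	// Equivalent of POST sys/auth/approle with {"type":"approle"}:
	// mount the AppRole auth backend at the path "approle".
	if err := client.Sys().EnableAuth("approle", "approle", ""); err != nil {
		log.Fatal(err) // fails here with the same "node not active" error
	}
	log.Println("approle auth backend enabled")
}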
@younisshah Check your Vault logs for errors; there is likely some issue being reported, since when this happens it is normally a storage backend issue.
@jefferai Thank you for the reply! Here is my log:
2017/04/27 09:55:28.101074 [WARN ] physical/consul: appending trailing forward slash to path
2017/04/27 09:55:39.209758 [INFO ] core: security barrier not initialized
2017/04/27 09:55:39.211047 [INFO ] core: security barrier not initialized
2017/04/27 09:55:39.212196 [INFO ] core: security barrier initialized: shares=5 threshold=3
2017/04/27 09:55:39.239239 [INFO ] core: post-unseal setup starting
2017/04/27 09:55:39.251281 [INFO ] core: loaded wrapping token key
2017/04/27 09:55:39.253667 [INFO ] core: successfully mounted backend: type=generic path=secret/
2017/04/27 09:55:39.253729 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2017/04/27 09:55:39.253857 [INFO ] core: successfully mounted backend: type=system path=sys/
2017/04/27 09:55:39.253929 [INFO ] rollback: starting rollback manager
2017/04/27 09:55:39.257011 [INFO ] expiration: restoring leases
2017/04/27 09:55:39.260634 [INFO ] core: post-unseal setup complete
2017/04/27 09:55:39.260640 [INFO ] core/startClusterListener: starting listener: listener_address=127.0.0.1:8201
2017/04/27 09:55:39.260879 [INFO ] core/startClusterListener: serving cluster requests: cluster_listen_address=127.0.0.1:8201
2017/04/27 09:55:39.261498 [INFO ] core: root token generated
2017/04/27 09:55:39.261509 [INFO ] core: pre-seal teardown starting
2017/04/27 09:55:39.261520 [INFO ] core: stopping cluster listeners
2017/04/27 09:55:39.261529 [INFO ] core: shutting down forwarding rpc listeners
2017/04/27 09:55:39.261551 [INFO ] core: forwarding rpc listeners stopped
2017/04/27 09:55:39.761119 [INFO ] core: rpc listeners successfully shut down
2017/04/27 09:55:39.761157 [INFO ] core: cluster listeners successfully shut down
2017/04/27 09:55:39.761186 [INFO ] rollback: stopping rollback manager
2017/04/27 09:55:39.761296 [INFO ] core: pre-seal teardown complete
2017/04/27 09:55:39.765777 [INFO ] core: vault is unsealed
2017/04/27 09:55:39.765807 [WARN ] physical/consul: Concurrent sealed state change notify dropped
2017/04/27 09:55:39.765878 [INFO ] core: entering standby mode
2017/04/27 09:55:39.768398 [INFO ] core: acquired lock, enabling active operation
2017/04/27 09:55:39.802179 [WARN ] physical/consul: Concurrent state change notify dropped
2017/04/27 09:55:39.802195 [INFO ] core: post-unseal setup starting
2017/04/27 09:55:39.802517 [INFO ] core: loaded wrapping token key
2017/04/27 09:55:39.803069 [INFO ] core: successfully mounted backend: type=generic path=secret/
2017/04/27 09:55:39.803158 [INFO ] core: successfully mounted backend: type=system path=sys/
2017/04/27 09:55:39.803180 [INFO ] core: successfully mounted backend: type=cubbyhole path=cubbyhole/
2017/04/27 09:55:39.803246 [INFO ] rollback: starting rollback manager
2017/04/27 09:55:39.804869 [INFO ] expiration: restoring leases
2017/04/27 09:55:39.806531 [INFO ] core: post-unseal setup complete
2017/04/27 09:55:39.806538 [INFO ] core/startClusterListener: starting listener: listener_address=127.0.0.1:8201
2017/04/27 09:55:39.806627 [INFO ] core/startClusterListener: serving cluster requests: cluster_listen_address=127.0.0.1:8201
Can you see any problem with the Vault? There is this line: 2017/04/27 09:55:39.765878 [INFO ] core: entering standby mode. Why is it entering standby mode, or is this normal?
I start Vault as vault server -config=vault-conf.hcl and the Consul agent as consul agent -server -bootstrap-expect 1 -data-dir ./consul -bind 127.0.0.1.
Appreciated!
Hi,
Can you provide the _full_ log? (It outputs some information at the beginning that would be useful.) It would also be good if you could get it at trace level.
Also, what is your configuration?
@younisshah I am running Vault with a Consul storage backend and I experienced the same thing when running some shell scripts that interact with Vault. The logs were mostly unhelpful, but in my specific case I was starting Consul, then immediately starting Vault, then immediately running the shell scripts against Vault. I believe the problem was that I started Vault so quickly after the Consul process that Consul was not yet accepting connections, so the connection failed. The same applied to running the shell scripts against Vault immediately after starting the Vault server.
I'm not sure what the best way is to determine when Consul is accepting connections and when Vault has successfully connected to Consul; maybe @jefferai could shed some light on it? Any chance they support something like systemd's notify service type?
I was able to fix this by putting a 10s delay between starting Consul and starting Vault, and another 10s delay between starting Vault and calling my scripts.
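Rather than fixed delays, one alternative is to poll Consul's status endpoint until it reports a leader, and then poll Vault's health endpoint until it reports active. A sketch under those assumptions, using the /v1/status/leader (Consul) and /v1/sys/health (Vault) HTTP API endpoints:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"strings"
	"time"
)

// waitFor polls url once a second until check(resp) returns true,
// giving up after 30 attempts.
func waitFor(url string, check func(*http.Response) bool) error {
	for i := 0; i < 30; i++ {
		if resp, err := http.Get(url); err == nil {
			ok := check(resp)
			resp.Body.Close()
			if ok {
				return nil
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	// Consul returns its elected leader address as a JSON string;
	// an empty string ("") means no leader has been elected yet.
	if err := waitFor("http://127.0.0.1:8500/v1/status/leader", func(r *http.Response) bool {
		body, _ := ioutil.ReadAll(r.Body)
		return strings.Trim(string(body), "\"\n") != ""
	}); err != nil {
		panic(err)
	}

	// Vault's unauthenticated health endpoint returns 200 when this node
	// is initialized, unsealed, and active (429 means standby, 503 sealed).
	if err := waitFor("http://127.0.0.1:8200/v1/sys/health", func(r *http.Response) bool {
		return r.StatusCode == 200
	}); err != nil {
		panic(err)
	}

	fmt.Println("consul has a leader and vault is active")
}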
Closing due to lack of response.
Our cluster was getting this message today also: node not active but active node not found. We restarted the Consul agent on all the nodes in the cluster and Vault was then able to elect a leader.
I do not have TLS enabled but am still seeing this error:
local node not active but active cluster node not found