grains.setval doesn't set grains if the grain is set in /etc/salt/minion. Restarting the minion appears to have no effect.
For example:
sudo salt deploy-api-ylm1 grains.setval roles '[deploy-api]'
deploy-api-ylm1:
    ----------
    roles:
        - deploy-api
sudo salt deploy-api-ylm1 grains.get roles
deploy-api-ylm1:
The only way I've been able to get it to work is to SSH into the minion and edit /etc/salt/minion to contain:
grains:
  env: dev
  roles:
    - deploy-api
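After that edit, a minion service restart picks the grain up; a minimal verification sketch, assuming a systemd-managed salt-minion (Debian 8, per the versions report below):
# on the minion, after editing /etc/salt/minion
sudo systemctl restart salt-minion
# from the master
sudo salt deploy-api-ylm1 grains.get roles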
Salt Version:
    Salt: 2016.3.0
Dependency Versions:
    cffi: 1.6.0
    cherrypy: Not Installed
    dateutil: 2.2
    gitdb: 0.5.4
    gitpython: 0.3.2 RC1
    ioflo: Not Installed
    Jinja2: 2.7.3
    libgit2: 0.24.0
    libnacl: Not Installed
    M2Crypto: Not Installed
    Mako: Not Installed
    msgpack-pure: Not Installed
    msgpack-python: 0.4.2
    mysql-python: 1.2.3
    pycparser: 2.14
    pycrypto: 2.6.1
    pygit2: 0.24.0
    Python: 2.7.9 (default, Mar 1 2015, 12:57:24)
    python-gnupg: Not Installed
    PyYAML: 3.11
    PyZMQ: 14.4.0
    RAET: Not Installed
    smmap: 0.8.2
    timelib: Not Installed
    Tornado: 4.2.1
    ZMQ: 4.0.5
System Versions:
    dist: debian 8.4
    machine: x86_64
    release: 3.16.0-4-amd64
    system: Linux
    version: debian 8.4
@grobinson-blockchain Do you already have the roles grain in the /etc/salt/minion configuration file? I believe grains.setval writes grains to the /etc/salt/grains file, which would explain the conflict: if roles is already defined in /etc/salt/minion and you try to add a role to that list, the new value ends up in /etc/salt/grains instead.
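A quick way to check where the value actually landed (a sketch; it assumes the default /etc/salt/grains location described above):
# from the master: inspect the grains file that grains.setval writes to
sudo salt deploy-api-ylm1 cmd.run 'cat /etc/salt/grains'
# expected contents after the setval from the report:
# roles:
# - deploy-api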
Yep! I have one grain in /etc/salt/minion, a generic grain shared by all instances (it is baked into the AMI I boot every instance from). I then intended to use the Salt master to overwrite each minion's grains with grains.setval.
Yeah, I am seeing the same behavior when roles is already set in /etc/salt/minion: there is a conflict between grains set in /etc/salt/minion and /etc/salt/grains, and they don't seem to merge. I also don't see anything in the code that lets you target the /etc/salt/minion configuration file instead of /etc/salt/grains.
@Ch3LL I'm running into this issue while trying to append values to a grain I have defined in /etc/salt/minion across our fleet. Salt minions seem to ignore values in /etc/salt/grains if the grain is defined in the /etc/salt/minion file.
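Roughly what that fleet-wide append looks like (the grain and value names here are illustrative, not the actual ones from the gist below):
# append a value to a grain on every minion; per the behavior above, it is
# ignored on minions that also define that grain in /etc/salt/minion
sudo salt '*' grains.append roles some-new-role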
Just curious to see if this has gained any traction. My issue is detailed here => https://gist.github.com/ndobbs/459b910c930eb1dd0261564360c47d86
ZD-1777
I am also seeing this when I do an append to roles, but only on some minions. grains.append sets it in the file on the file system with no problem; however, when I do a grains.get roles it won't show up. I have done a clear_cache and it still doesn't return. This only happens on certain grains, not all.
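The sequence in question, roughly (the minion ID and role value are placeholders, and saltutil.clear_cache is my guess at the cache-clearing step mentioned above):
sudo salt affected-minion grains.append roles some-role   # value shows up in /etc/salt/grains on disk
sudo salt affected-minion grains.get roles                # comes back empty on the affected minions
sudo salt affected-minion saltutil.clear_cache            # clearing the minion cache does not help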
@acaiafa I was able to work around this by setting a grain in /etc/salt/minion called node_type; this lets me manipulate the roles grain with the grains state. So it seems that as long as the grain is not defined in the minion config file, all of the grains listed in /etc/salt/grains are loaded. The documentation refers to this behavior: "The content of /etc/salt/grains is ignored if you specify grains in the minion config." I don't feel this is completely accurate, though, as it only seems to ignore grains that are also defined in the minion config rather than everything under the grains: block.
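A sketch of that workaround; node_type comes from the comment above, while the SLS and role value are illustrative:
# /etc/salt/minion -- only the generic grain lives here
grains:
  node_type: web

# roles.sls -- manage the roles grain through the grains state instead
add-deploy-api-role:
  grains.list_present:
    - name: roles
    - value: deploy-api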
@ndobbs Well, my problem is that if I run that command against 200 machines, 4 of them show this behavior while the other 196 work just fine, which makes absolutely no sense.
@ndobbs
I don't feel this is completely accurate, though, as it only seems to ignore grains that are also defined in the minion config rather than everything under the grains: block.
If you have a grain defined in the conf (minion config) and the same grain defined in /etc/salt/grains, the value from the conf wins. However, any other grain defined in /etc/salt/grains can still be manipulated.
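A minimal illustration of that precedence (file contents and values are hypothetical):
# /etc/salt/minion
grains:
  roles:
    - from-minion-conf

# /etc/salt/grains
roles:
- from-grains-file
env: dev

# per the precedence described above:
#   grains.get roles -> from-minion-conf  (the minion config wins for the conflicting grain)
#   grains.get env   -> dev               (the non-conflicting grain still loads from /etc/salt/grains)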
is this what you're seeing?
@austinpapp Yes, that's what I'm seeing.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
If this issue is closed prematurely, please leave a comment and we will gladly reopen the issue.